Dec 13 09:10:43.146095 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 09:10:43.146134 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:10:43.146167 kernel: BIOS-provided physical RAM map:
Dec 13 09:10:43.146178 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 09:10:43.146187 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 09:10:43.146197 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 09:10:43.146209 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 13 09:10:43.146219 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 13 09:10:43.146229 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 09:10:43.146243 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 09:10:43.146260 kernel: NX (Execute Disable) protection: active
Dec 13 09:10:43.146271 kernel: APIC: Static calls initialized
Dec 13 09:10:43.146281 kernel: SMBIOS 2.8 present.
Dec 13 09:10:43.146315 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 13 09:10:43.146324 kernel: Hypervisor detected: KVM
Dec 13 09:10:43.146335 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 09:10:43.146347 kernel: kvm-clock: using sched offset of 3812748686 cycles
Dec 13 09:10:43.146355 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 09:10:43.146362 kernel: tsc: Detected 2000.000 MHz processor
Dec 13 09:10:43.146370 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 09:10:43.146377 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 09:10:43.146385 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 13 09:10:43.146392 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 09:10:43.146399 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 09:10:43.146410 kernel: ACPI: Early table checksum verification disabled
Dec 13 09:10:43.146417 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Dec 13 09:10:43.146424 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 09:10:43.146431 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 09:10:43.146439 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 09:10:43.146446 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 09:10:43.146453 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 09:10:43.146459 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 09:10:43.146467 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 09:10:43.146477 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 09:10:43.146484 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Dec 13 09:10:43.146491 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Dec 13 09:10:43.146513 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 09:10:43.146520 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Dec 13 09:10:43.146527 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Dec 13 09:10:43.146534 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Dec 13 09:10:43.146552 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Dec 13 09:10:43.146560 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 09:10:43.146567 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 09:10:43.146575 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 09:10:43.146583 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 09:10:43.146590 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Dec 13 09:10:43.146598 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Dec 13 09:10:43.146608 kernel: Zone ranges:
Dec 13 09:10:43.146616 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 09:10:43.146623 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdafff]
Dec 13 09:10:43.146631 kernel:   Normal   empty
Dec 13 09:10:43.146639 kernel: Movable zone start for each node
Dec 13 09:10:43.146646 kernel: Early memory node ranges
Dec 13 09:10:43.146655 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 09:10:43.146662 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 13 09:10:43.146669 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Dec 13 09:10:43.146680 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 09:10:43.146692 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 09:10:43.146699 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Dec 13 09:10:43.146706 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 09:10:43.146714 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 09:10:43.146721 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 09:10:43.146729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 09:10:43.146738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 09:10:43.146750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 09:10:43.146765 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 09:10:43.146777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 09:10:43.146789 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 09:10:43.146801 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 09:10:43.146813 kernel: TSC deadline timer available
Dec 13 09:10:43.146825 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 09:10:43.146837 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 09:10:43.146849 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 09:10:43.146866 kernel: Booting paravirtualized kernel on KVM
Dec 13 09:10:43.146879 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 09:10:43.146900 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 09:10:43.146911 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 09:10:43.146922 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 09:10:43.146933 kernel: pcpu-alloc: [0] 0 1 
Dec 13 09:10:43.146944 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 09:10:43.146959 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:10:43.146972 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 09:10:43.146989 kernel: random: crng init done
Dec 13 09:10:43.147001 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 09:10:43.147013 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 09:10:43.147025 kernel: Fallback order for Node 0: 0 
Dec 13 09:10:43.147038 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515803
Dec 13 09:10:43.147051 kernel: Policy zone: DMA32
Dec 13 09:10:43.147063 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 09:10:43.147076 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 09:10:43.147089 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 09:10:43.147101 kernel: Kernel/User page tables isolation: enabled
Dec 13 09:10:43.147108 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 09:10:43.147116 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 09:10:43.147123 kernel: Dynamic Preempt: voluntary
Dec 13 09:10:43.147131 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 09:10:43.147140 kernel: rcu:         RCU event tracing is enabled.
Dec 13 09:10:43.147148 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 09:10:43.147155 kernel:         Trampoline variant of Tasks RCU enabled.
Dec 13 09:10:43.147163 kernel:         Rude variant of Tasks RCU enabled.
Dec 13 09:10:43.147174 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 09:10:43.147182 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 09:10:43.147194 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 09:10:43.147206 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 09:10:43.147227 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 09:10:43.147239 kernel: Console: colour VGA+ 80x25
Dec 13 09:10:43.147249 kernel: printk: console [tty0] enabled
Dec 13 09:10:43.147260 kernel: printk: console [ttyS0] enabled
Dec 13 09:10:43.147273 kernel: ACPI: Core revision 20230628
Dec 13 09:10:43.147286 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 09:10:43.148123 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 09:10:43.148133 kernel: x2apic enabled
Dec 13 09:10:43.148141 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 09:10:43.148149 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 09:10:43.148157 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 13 09:10:43.148165 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Dec 13 09:10:43.148172 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 09:10:43.148181 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 09:10:43.148203 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 09:10:43.148212 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 09:10:43.148221 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 09:10:43.148231 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 09:10:43.148240 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 09:10:43.148248 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 09:10:43.148256 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 09:10:43.148265 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 09:10:43.148274 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 09:10:43.148314 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 09:10:43.148323 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 09:10:43.148331 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 09:10:43.148339 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 09:10:43.148348 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 09:10:43.148356 kernel: Freeing SMP alternatives memory: 32K
Dec 13 09:10:43.148365 kernel: pid_max: default: 32768 minimum: 301
Dec 13 09:10:43.148373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 09:10:43.148385 kernel: landlock: Up and running.
Dec 13 09:10:43.148394 kernel: SELinux:  Initializing.
Dec 13 09:10:43.148414 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.148423 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.148431 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 13 09:10:43.148440 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:10:43.148448 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:10:43.148457 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:10:43.148469 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 13 09:10:43.148477 kernel: signal: max sigframe size: 1776
Dec 13 09:10:43.148485 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 09:10:43.148495 kernel: rcu:         Max phase no-delay instances is 400.
Dec 13 09:10:43.148503 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 09:10:43.148511 kernel: smp: Bringing up secondary CPUs ...
Dec 13 09:10:43.148519 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 09:10:43.148531 kernel: .... node  #0, CPUs:      #1
Dec 13 09:10:43.148540 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 09:10:43.148549 kernel: smpboot: Max logical packages: 1
Dec 13 09:10:43.148561 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Dec 13 09:10:43.148570 kernel: devtmpfs: initialized
Dec 13 09:10:43.148578 kernel: x86/mm: Memory block size: 128MB
Dec 13 09:10:43.148587 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 09:10:43.148595 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.148603 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 09:10:43.148612 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 09:10:43.148620 kernel: audit: initializing netlink subsys (disabled)
Dec 13 09:10:43.148628 kernel: audit: type=2000 audit(1734081041.978:1): state=initialized audit_enabled=0 res=1
Dec 13 09:10:43.148640 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 09:10:43.148649 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 09:10:43.148657 kernel: cpuidle: using governor menu
Dec 13 09:10:43.148665 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 09:10:43.148673 kernel: dca service started, version 1.12.1
Dec 13 09:10:43.148681 kernel: PCI: Using configuration type 1 for base access
Dec 13 09:10:43.148690 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 09:10:43.148699 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 09:10:43.148707 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 09:10:43.148719 kernel: ACPI: Added _OSI(Module Device)
Dec 13 09:10:43.148727 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 09:10:43.148735 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 09:10:43.148743 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 09:10:43.148752 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 09:10:43.148760 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 09:10:43.148769 kernel: ACPI: Interpreter enabled
Dec 13 09:10:43.148777 kernel: ACPI: PM: (supports S0 S5)
Dec 13 09:10:43.148785 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 09:10:43.148797 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 09:10:43.148805 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 09:10:43.148813 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 09:10:43.148822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 09:10:43.149138 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 09:10:43.149260 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 09:10:43.149402 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 09:10:43.149427 kernel: acpiphp: Slot [3] registered
Dec 13 09:10:43.149438 kernel: acpiphp: Slot [4] registered
Dec 13 09:10:43.149450 kernel: acpiphp: Slot [5] registered
Dec 13 09:10:43.149463 kernel: acpiphp: Slot [6] registered
Dec 13 09:10:43.149475 kernel: acpiphp: Slot [7] registered
Dec 13 09:10:43.149486 kernel: acpiphp: Slot [8] registered
Dec 13 09:10:43.149498 kernel: acpiphp: Slot [9] registered
Dec 13 09:10:43.149510 kernel: acpiphp: Slot [10] registered
Dec 13 09:10:43.149523 kernel: acpiphp: Slot [11] registered
Dec 13 09:10:43.149538 kernel: acpiphp: Slot [12] registered
Dec 13 09:10:43.149550 kernel: acpiphp: Slot [13] registered
Dec 13 09:10:43.149561 kernel: acpiphp: Slot [14] registered
Dec 13 09:10:43.149575 kernel: acpiphp: Slot [15] registered
Dec 13 09:10:43.149587 kernel: acpiphp: Slot [16] registered
Dec 13 09:10:43.149601 kernel: acpiphp: Slot [17] registered
Dec 13 09:10:43.149615 kernel: acpiphp: Slot [18] registered
Dec 13 09:10:43.149628 kernel: acpiphp: Slot [19] registered
Dec 13 09:10:43.149641 kernel: acpiphp: Slot [20] registered
Dec 13 09:10:43.149655 kernel: acpiphp: Slot [21] registered
Dec 13 09:10:43.149668 kernel: acpiphp: Slot [22] registered
Dec 13 09:10:43.149676 kernel: acpiphp: Slot [23] registered
Dec 13 09:10:43.149684 kernel: acpiphp: Slot [24] registered
Dec 13 09:10:43.149693 kernel: acpiphp: Slot [25] registered
Dec 13 09:10:43.149702 kernel: acpiphp: Slot [26] registered
Dec 13 09:10:43.149710 kernel: acpiphp: Slot [27] registered
Dec 13 09:10:43.149726 kernel: acpiphp: Slot [28] registered
Dec 13 09:10:43.149735 kernel: acpiphp: Slot [29] registered
Dec 13 09:10:43.149743 kernel: acpiphp: Slot [30] registered
Dec 13 09:10:43.149754 kernel: acpiphp: Slot [31] registered
Dec 13 09:10:43.149762 kernel: PCI host bridge to bus 0000:00
Dec 13 09:10:43.149920 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 09:10:43.150050 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 09:10:43.150165 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 09:10:43.150267 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 09:10:43.150376 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 09:10:43.150464 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 09:10:43.150629 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 09:10:43.150824 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 09:10:43.150976 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 09:10:43.151080 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc1e0-0xc1ef]
Dec 13 09:10:43.151212 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Dec 13 09:10:43.152047 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Dec 13 09:10:43.152255 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Dec 13 09:10:43.152428 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Dec 13 09:10:43.152570 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 13 09:10:43.152672 kernel: pci 0000:00:01.2: reg 0x20: [io  0xc180-0xc19f]
Dec 13 09:10:43.152819 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 09:10:43.152967 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 09:10:43.153072 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 09:10:43.153212 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 09:10:43.153337 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 09:10:43.153435 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 09:10:43.153564 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Dec 13 09:10:43.153705 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 09:10:43.153848 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 09:10:43.153992 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 09:10:43.154108 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc1a0-0xc1bf]
Dec 13 09:10:43.155281 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Dec 13 09:10:43.155478 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 09:10:43.155600 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 09:10:43.155725 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc1c0-0xc1df]
Dec 13 09:10:43.155867 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Dec 13 09:10:43.156007 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 09:10:43.156176 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Dec 13 09:10:43.156366 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc100-0xc13f]
Dec 13 09:10:43.156467 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Dec 13 09:10:43.156584 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 09:10:43.156728 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Dec 13 09:10:43.156875 kernel: pci 0000:00:06.0: reg 0x10: [io  0xc000-0xc07f]
Dec 13 09:10:43.157020 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Dec 13 09:10:43.157124 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 09:10:43.157286 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Dec 13 09:10:43.157453 kernel: pci 0000:00:07.0: reg 0x10: [io  0xc080-0xc0ff]
Dec 13 09:10:43.157567 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Dec 13 09:10:43.157670 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 13 09:10:43.157788 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 09:10:43.157926 kernel: pci 0000:00:08.0: reg 0x10: [io  0xc140-0xc17f]
Dec 13 09:10:43.158028 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 13 09:10:43.158039 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 09:10:43.158049 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 09:10:43.158058 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 09:10:43.158067 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 09:10:43.158080 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 09:10:43.158089 kernel: iommu: Default domain type: Translated
Dec 13 09:10:43.158097 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 09:10:43.158106 kernel: PCI: Using ACPI for IRQ routing
Dec 13 09:10:43.158115 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 09:10:43.158124 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 09:10:43.158132 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Dec 13 09:10:43.158508 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 09:10:43.158624 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 09:10:43.158803 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 09:10:43.158817 kernel: vgaarb: loaded
Dec 13 09:10:43.158826 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 09:10:43.158847 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 09:10:43.158856 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 09:10:43.158865 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 09:10:43.158875 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 09:10:43.158883 kernel: pnp: PnP ACPI init
Dec 13 09:10:43.158896 kernel: pnp: PnP ACPI: found 4 devices
Dec 13 09:10:43.158925 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 09:10:43.158938 kernel: NET: Registered PF_INET protocol family
Dec 13 09:10:43.158952 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 09:10:43.158965 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 09:10:43.158977 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 09:10:43.158991 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 09:10:43.159007 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 09:10:43.159033 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 09:10:43.159050 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.159070 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.159085 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 09:10:43.159099 kernel: NET: Registered PF_XDP protocol family
Dec 13 09:10:43.159272 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 09:10:43.159447 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 09:10:43.159576 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 09:10:43.159703 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 09:10:43.160095 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 09:10:43.160376 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 09:10:43.160541 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 09:10:43.160563 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 09:10:43.160694 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 38140 usecs
Dec 13 09:10:43.160711 kernel: PCI: CLS 0 bytes, default 64
Dec 13 09:10:43.160720 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 09:10:43.160730 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 13 09:10:43.160739 kernel: Initialise system trusted keyrings
Dec 13 09:10:43.160755 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 09:10:43.160763 kernel: Key type asymmetric registered
Dec 13 09:10:43.160772 kernel: Asymmetric key parser 'x509' registered
Dec 13 09:10:43.160780 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 09:10:43.160789 kernel: io scheduler mq-deadline registered
Dec 13 09:10:43.160797 kernel: io scheduler kyber registered
Dec 13 09:10:43.160806 kernel: io scheduler bfq registered
Dec 13 09:10:43.160814 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 09:10:43.160822 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 09:10:43.160834 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 09:10:43.160842 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 09:10:43.160850 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 09:10:43.160859 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 09:10:43.160867 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 09:10:43.160876 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 09:10:43.160889 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 09:10:43.161073 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 09:10:43.161090 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 09:10:43.161183 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 09:10:43.161318 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T09:10:42 UTC (1734081042)
Dec 13 09:10:43.161408 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 09:10:43.161420 kernel: intel_pstate: CPU model not supported
Dec 13 09:10:43.161429 kernel: NET: Registered PF_INET6 protocol family
Dec 13 09:10:43.161437 kernel: Segment Routing with IPv6
Dec 13 09:10:43.161446 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 09:10:43.161454 kernel: NET: Registered PF_PACKET protocol family
Dec 13 09:10:43.161466 kernel: Key type dns_resolver registered
Dec 13 09:10:43.161475 kernel: IPI shorthand broadcast: enabled
Dec 13 09:10:43.161486 kernel: sched_clock: Marking stable (1402012721, 168742023)->(1647392852, -76638108)
Dec 13 09:10:43.161499 kernel: registered taskstats version 1
Dec 13 09:10:43.161512 kernel: Loading compiled-in X.509 certificates
Dec 13 09:10:43.161524 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 09:10:43.161536 kernel: Key type .fscrypt registered
Dec 13 09:10:43.161549 kernel: Key type fscrypt-provisioning registered
Dec 13 09:10:43.161562 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 09:10:43.161578 kernel: ima: Allocated hash algorithm: sha1
Dec 13 09:10:43.163390 kernel: ima: No architecture policies found
Dec 13 09:10:43.163402 kernel: clk: Disabling unused clocks
Dec 13 09:10:43.163412 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 09:10:43.163421 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 09:10:43.163453 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 09:10:43.163465 kernel: Run /init as init process
Dec 13 09:10:43.163473 kernel:   with arguments:
Dec 13 09:10:43.163483 kernel:     /init
Dec 13 09:10:43.163495 kernel:   with environment:
Dec 13 09:10:43.163503 kernel:     HOME=/
Dec 13 09:10:43.163512 kernel:     TERM=linux
Dec 13 09:10:43.163520 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 09:10:43.163539 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 09:10:43.163557 systemd[1]: Detected virtualization kvm.
Dec 13 09:10:43.163573 systemd[1]: Detected architecture x86-64.
Dec 13 09:10:43.163588 systemd[1]: Running in initrd.
Dec 13 09:10:43.163608 systemd[1]: No hostname configured, using default hostname.
Dec 13 09:10:43.163622 systemd[1]: Hostname set to <localhost>.
Dec 13 09:10:43.163637 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 09:10:43.163652 systemd[1]: Queued start job for default target initrd.target.
Dec 13 09:10:43.163662 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:10:43.163672 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:10:43.163749 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 09:10:43.163764 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 09:10:43.163796 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 09:10:43.163805 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 09:10:43.163823 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 09:10:43.163838 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 09:10:43.163852 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:10:43.163866 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:10:43.163885 systemd[1]: Reached target paths.target - Path Units.
Dec 13 09:10:43.163898 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 09:10:43.163913 systemd[1]: Reached target swap.target - Swaps.
Dec 13 09:10:43.163930 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 09:10:43.163944 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 09:10:43.163958 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 09:10:43.163975 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 09:10:43.163988 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 09:10:43.164002 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:10:43.164016 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 09:10:43.164030 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 09:10:43.164044 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 09:10:43.164060 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 09:10:43.164075 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 09:10:43.164094 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 09:10:43.164108 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 09:10:43.164124 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 09:10:43.164138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 09:10:43.164153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:43.164240 systemd-journald[182]: Collecting audit messages is disabled.
Dec 13 09:10:43.165351 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 09:10:43.165394 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 09:10:43.165416 systemd-journald[182]: Journal started
Dec 13 09:10:43.165464 systemd-journald[182]: Runtime Journal (/run/log/journal/f26f9c604b454a8b98bd33f6e5163bb6) is 4.9M, max 39.3M, 34.4M free.
Dec 13 09:10:43.170597 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 09:10:43.174648 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 09:10:43.179989 systemd-modules-load[183]: Inserted module 'overlay'
Dec 13 09:10:43.242468 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 09:10:43.242527 kernel: Bridge firewalling registered
Dec 13 09:10:43.192693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 09:10:43.226745 systemd-modules-load[183]: Inserted module 'br_netfilter'
Dec 13 09:10:43.250783 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 09:10:43.252446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 09:10:43.269757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:43.272627 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 09:10:43.282716 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 09:10:43.285976 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 09:10:43.289598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 09:10:43.292940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 09:10:43.323817 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:10:43.331979 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 09:10:43.334859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:10:43.336139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 09:10:43.360778 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 09:10:43.376627 dracut-cmdline[214]: dracut-dracut-053
Dec 13 09:10:43.380706 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:10:43.422375 systemd-resolved[217]: Positive Trust Anchors:
Dec 13 09:10:43.423566 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 09:10:43.423621 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 09:10:43.432543 systemd-resolved[217]: Defaulting to hostname 'linux'.
Dec 13 09:10:43.434748 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 09:10:43.436887 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 09:10:43.513377 kernel: SCSI subsystem initialized
Dec 13 09:10:43.529353 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 09:10:43.545806 kernel: iscsi: registered transport (tcp)
Dec 13 09:10:43.574781 kernel: iscsi: registered transport (qla4xxx)
Dec 13 09:10:43.574903 kernel: QLogic iSCSI HBA Driver
Dec 13 09:10:43.659579 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 09:10:43.667729 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 09:10:43.715706 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 09:10:43.715873 kernel: device-mapper: uevent: version 1.0.3
Dec 13 09:10:43.715891 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 09:10:43.783387 kernel: raid6: avx2x4   gen() 23238 MB/s
Dec 13 09:10:43.800386 kernel: raid6: avx2x2   gen() 22166 MB/s
Dec 13 09:10:43.817746 kernel: raid6: avx2x1   gen() 19092 MB/s
Dec 13 09:10:43.817880 kernel: raid6: using algorithm avx2x4 gen() 23238 MB/s
Dec 13 09:10:43.837422 kernel: raid6: .... xor() 7142 MB/s, rmw enabled
Dec 13 09:10:43.837535 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 09:10:43.874349 kernel: xor: automatically using best checksumming function   avx       
Dec 13 09:10:44.104239 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 09:10:44.123213 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 09:10:44.129665 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 09:10:44.152984 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Dec 13 09:10:44.160242 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 09:10:44.168303 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 09:10:44.190653 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Dec 13 09:10:44.235838 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 09:10:44.243724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 09:10:44.313427 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 09:10:44.322924 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 09:10:44.358018 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 09:10:44.361862 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 09:10:44.362706 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 09:10:44.366106 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 09:10:44.373554 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 09:10:44.409390 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 09:10:44.450320 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Dec 13 09:10:44.535124 kernel: libata version 3.00 loaded.
Dec 13 09:10:44.535163 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 09:10:44.555668 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 09:10:44.555698 kernel: scsi host1: ata_piix
Dec 13 09:10:44.555969 kernel: scsi host0: Virtio SCSI HBA
Dec 13 09:10:44.556163 kernel: scsi host2: ata_piix
Dec 13 09:10:44.556430 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Dec 13 09:10:44.556452 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Dec 13 09:10:44.556468 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 09:10:44.556652 kernel: ACPI: bus type USB registered
Dec 13 09:10:44.556673 kernel: usbcore: registered new interface driver usbfs
Dec 13 09:10:44.556731 kernel: usbcore: registered new interface driver hub
Dec 13 09:10:44.556750 kernel: usbcore: registered new device driver usb
Dec 13 09:10:44.556781 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 09:10:44.556799 kernel: GPT:9289727 != 125829119
Dec 13 09:10:44.556815 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 09:10:44.556831 kernel: GPT:9289727 != 125829119
Dec 13 09:10:44.556847 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 09:10:44.556862 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 09:10:44.556879 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Dec 13 09:10:44.557104 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Dec 13 09:10:44.519777 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 09:10:44.519964 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:10:44.540553 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 09:10:44.541642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:10:44.541916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:44.542707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:44.569434 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 09:10:44.569475 kernel: AES CTR mode by8 optimization enabled
Dec 13 09:10:44.552929 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:44.647536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:44.652603 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 09:10:44.713855 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:10:44.745642 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (449)
Dec 13 09:10:44.745672 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (444)
Dec 13 09:10:44.753046 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 09:10:44.762187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 09:10:44.770121 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 13 09:10:44.771338 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 13 09:10:44.771496 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 13 09:10:44.771663 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Dec 13 09:10:44.771861 kernel: hub 1-0:1.0: USB hub found
Dec 13 09:10:44.772180 kernel: hub 1-0:1.0: 2 ports detected
Dec 13 09:10:44.781080 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 09:10:44.781822 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 09:10:44.789124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 09:10:44.797511 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 09:10:44.808856 disk-uuid[547]: Primary Header is updated.
Dec 13 09:10:44.808856 disk-uuid[547]: Secondary Entries is updated.
Dec 13 09:10:44.808856 disk-uuid[547]: Secondary Header is updated.
Dec 13 09:10:44.815331 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 09:10:44.825333 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 09:10:44.832360 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 09:10:45.836202 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 09:10:45.836286 disk-uuid[549]: The operation has completed successfully.
Dec 13 09:10:45.894576 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 09:10:45.894735 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 09:10:45.919762 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 09:10:45.926042 sh[562]: Success
Dec 13 09:10:45.948360 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 09:10:46.045140 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 09:10:46.060518 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 09:10:46.067164 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 09:10:46.101797 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 09:10:46.101914 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 09:10:46.101933 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 09:10:46.106439 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 09:10:46.106801 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 09:10:46.120908 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 09:10:46.122730 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 09:10:46.131976 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 09:10:46.137591 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 09:10:46.157348 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:46.157462 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 09:10:46.157476 kernel: BTRFS info (device vda6): using free space tree
Dec 13 09:10:46.163341 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 09:10:46.186078 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 09:10:46.188092 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:46.197440 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 09:10:46.207714 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 09:10:46.342264 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 09:10:46.354729 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 09:10:46.388602 ignition[654]: Ignition 2.19.0
Dec 13 09:10:46.388622 ignition[654]: Stage: fetch-offline
Dec 13 09:10:46.388704 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:46.392116 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 09:10:46.388719 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:46.388896 ignition[654]: parsed url from cmdline: ""
Dec 13 09:10:46.388901 ignition[654]: no config URL provided
Dec 13 09:10:46.388910 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 09:10:46.388922 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Dec 13 09:10:46.388931 ignition[654]: failed to fetch config: resource requires networking
Dec 13 09:10:46.389277 ignition[654]: Ignition finished successfully
Dec 13 09:10:46.413459 systemd-networkd[748]: lo: Link UP
Dec 13 09:10:46.413475 systemd-networkd[748]: lo: Gained carrier
Dec 13 09:10:46.417119 systemd-networkd[748]: Enumeration completed
Dec 13 09:10:46.417356 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 09:10:46.419079 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 09:10:46.419086 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Dec 13 09:10:46.420865 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 09:10:46.420870 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 09:10:46.420917 systemd[1]: Reached target network.target - Network.
Dec 13 09:10:46.421769 systemd-networkd[748]: eth0: Link UP
Dec 13 09:10:46.421776 systemd-networkd[748]: eth0: Gained carrier
Dec 13 09:10:46.421789 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 09:10:46.427661 systemd-networkd[748]: eth1: Link UP
Dec 13 09:10:46.427673 systemd-networkd[748]: eth1: Gained carrier
Dec 13 09:10:46.427698 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 09:10:46.430659 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 09:10:46.440428 systemd-networkd[748]: eth0: DHCPv4 address 165.232.145.99/20, gateway 165.232.144.1 acquired from 169.254.169.253
Dec 13 09:10:46.444520 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.7/20, gateway 10.124.0.1 acquired from 169.254.169.253
Dec 13 09:10:46.461675 ignition[754]: Ignition 2.19.0
Dec 13 09:10:46.461698 ignition[754]: Stage: fetch
Dec 13 09:10:46.462031 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:46.462047 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:46.462341 ignition[754]: parsed url from cmdline: ""
Dec 13 09:10:46.462346 ignition[754]: no config URL provided
Dec 13 09:10:46.462352 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 09:10:46.462364 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Dec 13 09:10:46.462389 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Dec 13 09:10:46.512755 ignition[754]: GET result: OK
Dec 13 09:10:46.513802 ignition[754]: parsing config with SHA512: bf2f55fb6d75dc1a4258c364a2efb9fdddacc86901c6a3c78266c012d69616cfa672677b851e1e4026d6dcae1c51b51395eb19268681caf357965b26bd3694e0
Dec 13 09:10:46.520407 unknown[754]: fetched base config from "system"
Dec 13 09:10:46.520426 unknown[754]: fetched base config from "system"
Dec 13 09:10:46.521465 ignition[754]: fetch: fetch complete
Dec 13 09:10:46.520437 unknown[754]: fetched user config from "digitalocean"
Dec 13 09:10:46.521472 ignition[754]: fetch: fetch passed
Dec 13 09:10:46.525511 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 09:10:46.521565 ignition[754]: Ignition finished successfully
Dec 13 09:10:46.535692 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 09:10:46.566719 ignition[762]: Ignition 2.19.0
Dec 13 09:10:46.566735 ignition[762]: Stage: kargs
Dec 13 09:10:46.566989 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:46.567001 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:46.568410 ignition[762]: kargs: kargs passed
Dec 13 09:10:46.569814 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 09:10:46.568489 ignition[762]: Ignition finished successfully
Dec 13 09:10:46.588837 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 09:10:46.610874 ignition[769]: Ignition 2.19.0
Dec 13 09:10:46.610894 ignition[769]: Stage: disks
Dec 13 09:10:46.611190 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:46.611204 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:46.612657 ignition[769]: disks: disks passed
Dec 13 09:10:46.613942 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 09:10:46.612734 ignition[769]: Ignition finished successfully
Dec 13 09:10:46.620622 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 09:10:46.621776 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 09:10:46.623611 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 09:10:46.624938 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 09:10:46.626695 systemd[1]: Reached target basic.target - Basic System.
Dec 13 09:10:46.643694 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 09:10:46.664919 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 09:10:46.669805 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 09:10:46.678464 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 09:10:46.817955 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 09:10:46.819847 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 09:10:46.821474 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 09:10:46.838111 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 09:10:46.844541 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 09:10:46.849582 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Dec 13 09:10:46.855893 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785)
Dec 13 09:10:46.855932 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:46.862849 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 09:10:46.862947 kernel: BTRFS info (device vda6): using free space tree
Dec 13 09:10:46.865766 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 09:10:46.873688 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 09:10:46.880067 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 09:10:46.873827 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 09:10:46.884942 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 09:10:46.885904 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 09:10:46.908849 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 09:10:46.999340 coreos-metadata[787]: Dec 13 09:10:46.993 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 09:10:47.008901 coreos-metadata[787]: Dec 13 09:10:47.008 INFO Fetch successful
Dec 13 09:10:47.016632 coreos-metadata[788]: Dec 13 09:10:47.016 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 09:10:47.019616 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Dec 13 09:10:47.019888 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Dec 13 09:10:47.023536 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 09:10:47.031105 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Dec 13 09:10:47.033773 coreos-metadata[788]: Dec 13 09:10:47.033 INFO Fetch successful
Dec 13 09:10:47.043773 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 09:10:47.045392 coreos-metadata[788]: Dec 13 09:10:47.044 INFO wrote hostname ci-4081.2.1-7-516c4b3017 to /sysroot/etc/hostname
Dec 13 09:10:47.046594 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 09:10:47.064500 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 09:10:47.246139 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
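
The coreos-metadata lines above show the two agents querying DigitalOcean's link-local metadata service and writing the droplet's hostname into the new root. A rough Python equivalent of that fetch-and-write step (the URL, target path, and hostname field mirror the log; error handling and retries are omitted):

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    # Fetch the droplet metadata document, as in "Fetching ... Attempt #1".
    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        metadata = json.load(resp)

    # Matches "wrote hostname ci-4081.2.1-7-516c4b3017 to /sysroot/etc/hostname".
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(metadata["hostname"] + "\n")
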
Dec 13 09:10:47.257691 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 09:10:47.260855 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 09:10:47.296463 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 09:10:47.297785 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:47.333420 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 09:10:47.354006 ignition[906]: INFO     : Ignition 2.19.0
Dec 13 09:10:47.358536 ignition[906]: INFO     : Stage: mount
Dec 13 09:10:47.358536 ignition[906]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:47.358536 ignition[906]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:47.363021 ignition[906]: INFO     : mount: mount passed
Dec 13 09:10:47.363021 ignition[906]: INFO     : Ignition finished successfully
Dec 13 09:10:47.362287 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 09:10:47.372567 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 09:10:47.410745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 09:10:47.426728 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917)
Dec 13 09:10:47.440385 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:47.440500 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 09:10:47.440522 kernel: BTRFS info (device vda6): using free space tree
Dec 13 09:10:47.447583 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 09:10:47.450381 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 09:10:47.495362 ignition[933]: INFO     : Ignition 2.19.0
Dec 13 09:10:47.495362 ignition[933]: INFO     : Stage: files
Dec 13 09:10:47.497152 ignition[933]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:47.497152 ignition[933]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:47.499184 ignition[933]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 09:10:47.499992 ignition[933]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 09:10:47.499992 ignition[933]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 09:10:47.505433 ignition[933]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 09:10:47.506749 ignition[933]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 09:10:47.506749 ignition[933]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 09:10:47.506446 unknown[933]: wrote ssh authorized keys file for user: core
Dec 13 09:10:47.510750 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 09:10:47.510750 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 09:10:47.553586 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 09:10:47.583577 systemd-networkd[748]: eth1: Gained IPv6LL
Dec 13 09:10:47.633743 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 09:10:47.635675 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 09:10:47.635675 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 09:10:47.635675 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Dec 13 09:10:47.635675 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 09:10:47.635675 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 09:10:47.635675 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 09:10:47.635675 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 09:10:47.635675 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 09:10:47.635675 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 09:10:47.653473 systemd-networkd[748]: eth0: Gained IPv6LL
Dec 13 09:10:47.653752 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 09:10:47.653752 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 09:10:47.653752 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 09:10:47.653752 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 09:10:47.653752 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 09:10:48.147141 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 09:10:48.641682 ignition[933]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 09:10:48.644113 ignition[933]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Dec 13 09:10:48.647021 ignition[933]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 09:10:48.648272 ignition[933]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 09:10:48.648272 ignition[933]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 09:10:48.648272 ignition[933]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Dec 13 09:10:48.648272 ignition[933]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 09:10:48.648272 ignition[933]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 09:10:48.648272 ignition[933]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 09:10:48.648272 ignition[933]: INFO     : files: files passed
Dec 13 09:10:48.648272 ignition[933]: INFO     : Ignition finished successfully
Dec 13 09:10:48.650916 systemd[1]: Finished ignition-files.service - Ignition (files).
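
The files stage above is driven entirely by the supplied Ignition config: each op(N) corresponds to a storage.files entry, and the prepare-helm.service ops map to a systemd unit with an enablement preset. A trimmed sketch of a config that would produce ops like op(3) and op(b) through op(d); the paths, URL, and unit description come from this log, while the spec version and unit body are illustrative placeholders:

    import json

    ignition_config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version
        "storage": {
            "files": [
                {
                    # Would produce op(3): GET and write of the helm tarball.
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                    },
                },
            ]
        },
        "systemd": {
            "units": [
                {
                    # Would produce op(b)/op(c) (write the unit) and
                    # op(d) (setting the preset to enabled).
                    "name": "prepare-helm.service",
                    "enabled": True,
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",  # placeholder body
                }
            ]
        },
    }

    print(json.dumps(ignition_config, indent=2))
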
Dec 13 09:10:48.661831 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 09:10:48.674898 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 09:10:48.689709 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 09:10:48.690323 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 09:10:48.716656 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 09:10:48.716656 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 09:10:48.719141 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 09:10:48.730109 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 09:10:48.732128 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 09:10:48.761639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 09:10:48.826408 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 09:10:48.826651 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 09:10:48.830738 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 09:10:48.831489 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 09:10:48.832251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 09:10:48.843568 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 09:10:48.892264 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 09:10:48.903953 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 09:10:48.928575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 09:10:48.929848 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 09:10:48.931844 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 09:10:48.934003 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 09:10:48.934459 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 09:10:48.936991 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 09:10:48.937927 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 09:10:48.940356 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 09:10:48.942569 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 09:10:48.944348 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 09:10:48.946494 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 09:10:48.948155 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 09:10:48.949142 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 09:10:48.950845 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 09:10:48.952928 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 09:10:48.954460 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 09:10:48.954657 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 09:10:48.956511 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:10:48.957388 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:10:48.959280 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 09:10:48.959542 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:10:48.960828 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 09:10:48.961053 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 09:10:48.963193 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 09:10:48.963479 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 09:10:48.967903 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 09:10:48.968133 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 09:10:48.968968 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 09:10:48.969096 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 09:10:48.978436 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 09:10:48.979243 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 09:10:48.979543 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 09:10:48.991725 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 09:10:48.992504 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 09:10:48.992748 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 09:10:48.998862 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 09:10:48.999057 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 09:10:49.013783 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 09:10:49.013963 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 09:10:49.022953 ignition[988]: INFO     : Ignition 2.19.0
Dec 13 09:10:49.022953 ignition[988]: INFO     : Stage: umount
Dec 13 09:10:49.026368 ignition[988]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:49.026368 ignition[988]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:49.026368 ignition[988]: INFO     : umount: umount passed
Dec 13 09:10:49.026368 ignition[988]: INFO     : Ignition finished successfully
Dec 13 09:10:49.030788 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 09:10:49.031688 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 09:10:49.034520 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 09:10:49.034681 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 09:10:49.045618 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 09:10:49.045755 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 09:10:49.046722 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 09:10:49.046825 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 09:10:49.047895 systemd[1]: Stopped target network.target - Network.
Dec 13 09:10:49.050513 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 09:10:49.050632 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 09:10:49.053816 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 09:10:49.054939 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 09:10:49.058464 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:10:49.059616 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 09:10:49.060906 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 09:10:49.062780 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 09:10:49.062872 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 09:10:49.065541 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 09:10:49.065603 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 09:10:49.066703 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 09:10:49.066813 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 09:10:49.067642 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 09:10:49.067727 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 09:10:49.072671 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 09:10:49.076375 systemd-networkd[748]: eth0: DHCPv6 lease lost
Dec 13 09:10:49.077702 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 09:10:49.084945 systemd-networkd[748]: eth1: DHCPv6 lease lost
Dec 13 09:10:49.085908 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 09:10:49.087443 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 09:10:49.087877 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 09:10:49.092522 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 09:10:49.092705 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 09:10:49.105193 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 09:10:49.105430 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 09:10:49.119490 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 09:10:49.119586 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:10:49.121006 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 09:10:49.121248 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 09:10:49.141715 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 09:10:49.146047 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 09:10:49.146199 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 09:10:49.147059 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 09:10:49.147143 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:10:49.147908 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 09:10:49.147986 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 09:10:49.148774 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 09:10:49.148851 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 09:10:49.149884 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 09:10:49.181527 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 09:10:49.181810 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 09:10:49.185141 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 09:10:49.185305 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 09:10:49.189753 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 09:10:49.189886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 09:10:49.192481 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 09:10:49.192564 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 09:10:49.197067 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 09:10:49.197197 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 09:10:49.198803 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 09:10:49.198897 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 09:10:49.200380 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 09:10:49.200472 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:10:49.224706 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 09:10:49.225421 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 09:10:49.225523 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 09:10:49.228166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:10:49.228256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:49.260908 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 09:10:49.262913 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 09:10:49.266059 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 09:10:49.299245 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 09:10:49.321213 systemd[1]: Switching root.
Dec 13 09:10:49.384562 systemd-journald[182]: Journal stopped
Dec 13 09:10:51.626397 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
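
The "Switching root" line and the SIGTERM that follows mark the pivot out of the initramfs: the initrd journald is stopped, PID 1 moves /sysroot to / and re-executes itself there, and logging resumes under the new root (hence the SELinux policy load below). The hand-off performed by initrd-switch-root.service amounts to roughly this call (illustrative, not the actual unit body):

    import subprocess

    # Ask the running systemd to make /sysroot the new root filesystem
    # and re-execute itself there as PID 1.
    subprocess.run(
        ["systemctl", "--no-block", "switch-root", "/sysroot"],
        check=True,
    )
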
Dec 13 09:10:51.626538 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 09:10:51.626561 kernel: SELinux:  policy capability open_perms=1
Dec 13 09:10:51.626577 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 09:10:51.626592 kernel: SELinux:  policy capability always_check_network=0
Dec 13 09:10:51.626608 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 09:10:51.626626 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 09:10:51.626642 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 09:10:51.626678 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 09:10:51.626695 kernel: audit: type=1403 audit(1734081049.824:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 09:10:51.626847 systemd[1]: Successfully loaded SELinux policy in 75.345ms.
Dec 13 09:10:51.626958 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.486ms.
Dec 13 09:10:51.626992 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 09:10:51.627012 systemd[1]: Detected virtualization kvm.
Dec 13 09:10:51.627031 systemd[1]: Detected architecture x86-64.
Dec 13 09:10:51.627049 systemd[1]: Detected first boot.
Dec 13 09:10:51.627074 systemd[1]: Hostname set to <ci-4081.2.1-7-516c4b3017>.
Dec 13 09:10:51.627093 systemd[1]: Initializing machine ID from VM UUID.
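
On KVM, "Initializing machine ID from VM UUID" means systemd seeds /etc/machine-id from the hypervisor-provided DMI product UUID rather than generating a random one. A rough sketch of that derivation (the sysfs path is the usual source of the UUID; the real code also validates it first):

    # Read the DMI product UUID exposed by the hypervisor and convert it
    # to the 32-hex-character /etc/machine-id format.
    with open("/sys/class/dmi/id/product_uuid") as f:
        product_uuid = f.read().strip()

    machine_id = product_uuid.replace("-", "").lower()
    print(machine_id)  # e.g. what gets committed to /etc/machine-id on first boot
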
Dec 13 09:10:51.627111 zram_generator::config[1032]: No configuration found.
Dec 13 09:10:51.627129 systemd[1]: Populated /etc with preset unit settings.
Dec 13 09:10:51.627146 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 09:10:51.627164 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 09:10:51.627182 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 09:10:51.627406 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 09:10:51.627440 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 09:10:51.627462 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 09:10:51.627512 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 09:10:51.627534 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 09:10:51.627555 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 09:10:51.627577 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 09:10:51.627604 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 09:10:51.627623 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:10:51.627643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:10:51.627676 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 09:10:51.627694 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 09:10:51.627720 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 09:10:51.627739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 09:10:51.627759 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 09:10:51.627779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:10:51.627798 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 09:10:51.627824 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 09:10:51.627844 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 09:10:51.627865 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 09:10:51.627882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 09:10:51.627900 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 09:10:51.627920 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 09:10:51.627939 systemd[1]: Reached target swap.target - Swaps.
Dec 13 09:10:51.627958 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 09:10:51.627983 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 09:10:51.628001 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:10:51.628022 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 09:10:51.628040 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 09:10:51.628057 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 09:10:51.628073 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 09:10:51.628089 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 09:10:51.628106 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 09:10:51.628134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:51.628158 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 09:10:51.628178 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 09:10:51.628198 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 09:10:51.628218 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 09:10:51.628238 systemd[1]: Reached target machines.target - Containers.
Dec 13 09:10:51.628257 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 09:10:51.628280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:10:51.630437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 09:10:51.630483 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 09:10:51.630515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 09:10:51.630532 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 09:10:51.630551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 09:10:51.630570 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 09:10:51.630585 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 09:10:51.630603 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 09:10:51.630619 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 09:10:51.630655 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 09:10:51.630677 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 09:10:51.630694 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 09:10:51.630712 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 09:10:51.630728 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 09:10:51.630745 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 09:10:51.630764 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 09:10:51.630781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 09:10:51.630801 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 09:10:51.630819 systemd[1]: Stopped verity-setup.service.
Dec 13 09:10:51.630841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:51.630864 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 09:10:51.630882 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 09:10:51.630901 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 09:10:51.630920 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 09:10:51.630939 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 09:10:51.630957 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 09:10:51.630979 kernel: loop: module loaded
Dec 13 09:10:51.630998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 09:10:51.631016 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 09:10:51.631034 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 09:10:51.631113 systemd-journald[1101]: Collecting audit messages is disabled.
Dec 13 09:10:51.631157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 09:10:51.631180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 09:10:51.631200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 09:10:51.631222 systemd-journald[1101]: Journal started
Dec 13 09:10:51.631271 systemd-journald[1101]: Runtime Journal (/run/log/journal/f26f9c604b454a8b98bd33f6e5163bb6) is 4.9M, max 39.3M, 34.4M free.
Dec 13 09:10:50.966810 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 09:10:51.027573 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 09:10:51.028775 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 09:10:51.639488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 09:10:51.639622 kernel: fuse: init (API version 7.39)
Dec 13 09:10:51.650166 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 09:10:51.650337 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 09:10:51.650481 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 09:10:51.651751 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 09:10:51.651970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 09:10:51.653196 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 09:10:51.654775 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 09:10:51.682567 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 09:10:51.691791 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 09:10:51.702519 kernel: ACPI: bus type drm_connector registered
Dec 13 09:10:51.706786 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 09:10:51.717458 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 09:10:51.720575 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 09:10:51.720641 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 09:10:51.726475 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 09:10:51.739040 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 09:10:51.747615 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 09:10:51.748625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:10:51.756579 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 09:10:51.768741 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 09:10:51.770219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 09:10:51.775638 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 09:10:51.776722 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 09:10:51.780094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 09:10:51.789614 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 09:10:51.798837 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 09:10:51.824397 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 09:10:51.824699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 09:10:51.826973 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 09:10:51.829955 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 09:10:51.839415 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 09:10:51.872780 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 09:10:51.954565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:10:51.962043 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 09:10:51.977575 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 09:10:51.997596 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 09:10:52.105995 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 09:10:52.109827 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 09:10:52.111647 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 09:10:51.903932 systemd-journald[1101]: Time spent on flushing to /var/log/journal/f26f9c604b454a8b98bd33f6e5163bb6 is 166.714ms for 987 entries.
Dec 13 09:10:51.903932 systemd-journald[1101]: System Journal (/var/log/journal/f26f9c604b454a8b98bd33f6e5163bb6) is 8.0M, max 195.6M, 187.6M free.
Dec 13 09:10:52.112760 systemd-journald[1101]: Received client request to flush runtime journal.
Dec 13 09:10:52.112841 kernel: loop0: detected capacity change from 0 to 140768
Dec 13 09:10:52.112877 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 09:10:52.112912 kernel: loop1: detected capacity change from 0 to 8
Dec 13 09:10:52.112935 kernel: loop2: detected capacity change from 0 to 205544
Dec 13 09:10:52.129033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 09:10:52.133179 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 09:10:52.139565 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 09:10:52.159366 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 09:10:52.190343 kernel: loop3: detected capacity change from 0 to 142488
Dec 13 09:10:52.223910 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Dec 13 09:10:52.226398 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Dec 13 09:10:52.229317 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 09:10:52.253799 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 09:10:52.281343 kernel: loop4: detected capacity change from 0 to 140768
Dec 13 09:10:52.312329 kernel: loop5: detected capacity change from 0 to 8
Dec 13 09:10:52.321353 kernel: loop6: detected capacity change from 0 to 205544
Dec 13 09:10:52.353373 kernel: loop7: detected capacity change from 0 to 142488
Dec 13 09:10:52.378544 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Dec 13 09:10:52.379995 (sd-merge)[1178]: Merged extensions into '/usr'.
Dec 13 09:10:52.395981 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 09:10:52.396009 systemd[1]: Reloading...
Dec 13 09:10:52.570446 zram_generator::config[1201]: No configuration found.
Dec 13 09:10:53.007512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:10:53.034332 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 09:10:53.095450 systemd[1]: Reloading finished in 698 ms.
Dec 13 09:10:53.128876 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 09:10:53.130677 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
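
The (sd-merge) lines above show systemd-sysext at work: it discovered the containerd, docker, kubernetes, and OEM extension images and overlaid them onto /usr, which is why the ldconfig rebuild and the long unit reload follow. A sketch of the discovery half, assuming the standard search directories (the actual merge mounts an overlayfs over /usr and /opt, not shown):

    from pathlib import Path

    # Directories systemd-sysext scans for extension images or trees.
    SEARCH_PATHS = [
        Path("/etc/extensions"),
        Path("/run/extensions"),
        Path("/var/lib/extensions"),
    ]

    def discover_extensions() -> list[Path]:
        found = []
        for base in SEARCH_PATHS:
            if not base.is_dir():
                continue
            for entry in sorted(base.iterdir()):
                # Raw disk images ("kubernetes.raw") or plain directory trees.
                if entry.suffix == ".raw" or entry.is_dir():
                    found.append(entry)
        return found

    # On this host the list would include /etc/extensions/kubernetes.raw,
    # the symlink written by Ignition op(9) earlier in this log.
    print(discover_extensions())
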
Dec 13 09:10:53.146629 systemd[1]: Starting ensure-sysext.service...
Dec 13 09:10:53.158790 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 09:10:53.184613 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Dec 13 09:10:53.184642 systemd[1]: Reloading...
Dec 13 09:10:53.236785 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 09:10:53.239739 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 09:10:53.243717 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 09:10:53.248127 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Dec 13 09:10:53.313989 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Dec 13 09:10:53.332398 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 09:10:53.332421 systemd-tmpfiles[1248]: Skipping /boot
Dec 13 09:10:53.338772 zram_generator::config[1273]: No configuration found.
Dec 13 09:10:53.378654 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 09:10:53.378677 systemd-tmpfiles[1248]: Skipping /boot
Dec 13 09:10:53.621917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:10:53.703249 systemd[1]: Reloading finished in 518 ms.
Dec 13 09:10:53.728748 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 09:10:53.735327 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 09:10:53.753678 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 09:10:53.759373 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 09:10:53.767757 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 09:10:53.776663 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 09:10:53.788724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 09:10:53.808592 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 09:10:53.826611 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 09:10:53.830986 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:53.833429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:10:53.844841 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 09:10:53.853793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 09:10:53.863804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 09:10:53.865920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:10:53.866202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:53.872038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:53.873444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:10:53.873849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:10:53.874117 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:53.879801 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:53.880179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:10:53.891794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 09:10:53.895661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:10:53.896021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:53.909913 systemd[1]: Finished ensure-sysext.service.
Dec 13 09:10:53.913875 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 09:10:53.914079 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 09:10:53.921188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 09:10:53.923588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 09:10:53.928986 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 09:10:53.951857 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 09:10:53.965881 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 09:10:53.966157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 09:10:53.967759 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 09:10:53.984718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 09:10:53.984975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 09:10:53.987908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 09:10:53.997616 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 09:10:54.002481 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 09:10:54.003462 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Dec 13 09:10:54.011702 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 09:10:54.012746 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 09:10:54.026580 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 09:10:54.071338 augenrules[1361]: No rules
Dec 13 09:10:54.074998 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 09:10:54.083769 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 09:10:54.094969 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 09:10:54.113455 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 09:10:54.174479 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 09:10:54.175990 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 09:10:54.306771 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 09:10:54.309136 systemd-resolved[1324]: Positive Trust Anchors:
Dec 13 09:10:54.310394 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 09:10:54.310462 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 09:10:54.325519 systemd-resolved[1324]: Using system hostname 'ci-4081.2.1-7-516c4b3017'.
Dec 13 09:10:54.326508 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1383)
Dec 13 09:10:54.335270 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 09:10:54.336712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 09:10:54.342376 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1383)
Dec 13 09:10:54.346439 systemd-networkd[1371]: lo: Link UP
Dec 13 09:10:54.346454 systemd-networkd[1371]: lo: Gained carrier
Dec 13 09:10:54.348633 systemd-networkd[1371]: Enumeration completed
Dec 13 09:10:54.349505 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 09:10:54.350542 systemd[1]: Reached target network.target - Network.
Dec 13 09:10:54.360828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 09:10:54.392499 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Dec 13 09:10:54.393479 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:54.393695 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:10:54.404279 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 09:10:54.409534 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 09:10:54.414681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 09:10:54.415698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:10:54.415755 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 09:10:54.415778 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:10:54.444339 kernel: ISO 9660 Extensions: RRIP_1991A
Dec 13 09:10:54.447439 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Dec 13 09:10:54.448093 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1382)
Dec 13 09:10:54.461926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 09:10:54.462404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 09:10:54.464091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 09:10:54.465779 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 09:10:54.467195 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 09:10:54.467974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 09:10:54.478258 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 09:10:54.484522 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 09:10:54.524400 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 09:10:54.529359 kernel: ACPI: button: Power Button [PWRF]
Dec 13 09:10:54.558496 systemd-networkd[1371]: eth0: Configuring with /run/systemd/network/10-6e:41:ed:32:0d:70.network.
Dec 13 09:10:54.560131 systemd-networkd[1371]: eth0: Link UP
Dec 13 09:10:54.560145 systemd-networkd[1371]: eth0: Gained carrier
Dec 13 09:10:54.564708 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Dec 13 09:10:54.574260 systemd-networkd[1371]: eth1: Configuring with /run/systemd/network/10-52:6d:32:12:f7:9f.network.
Dec 13 09:10:54.574985 systemd-networkd[1371]: eth1: Link UP
Dec 13 09:10:54.574995 systemd-networkd[1371]: eth1: Gained carrier
Dec 13 09:10:54.575904 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Dec 13 09:10:54.578427 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Dec 13 09:10:54.578632 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 09:10:54.578917 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
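
Both NICs above are matched by generated units named after their MAC addresses (10-6e:41:ed:32:0d:70.network and 10-52:6d:32:12:f7:9f.network under /run/systemd/network), written earlier by parse-ip-for-networkd from the kernel command line. A sketch of rendering such a MAC-matched unit; [Match] and [Network] are standard systemd.network sections, but the generator's exact keys are not visible in this log, so the body is illustrative:

    # Render a minimal MAC-matched systemd.network unit like those loaded above.
    def render_network_unit(mac: str, dhcp: str = "yes") -> str:
        return (
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            f"DHCP={dhcp}\n"
        )

    mac = "6e:41:ed:32:0d:70"  # eth0's MAC, taken from the log above
    with open(f"/run/systemd/network/10-{mac}.network", "w") as f:
        f.write(render_network_unit(mac))
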
Dec 13 09:10:54.589729 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 09:10:54.648729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:54.661316 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 09:10:54.680365 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 09:10:54.689706 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 09:10:54.729426 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 09:10:54.740990 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 13 09:10:54.741111 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 13 09:10:54.758337 kernel: Console: switching to colour dummy device 80x25
Dec 13 09:10:54.760577 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 09:10:54.760665 kernel: [drm] features: -context_init
Dec 13 09:10:54.762600 kernel: [drm] number of scanouts: 1
Dec 13 09:10:54.765318 kernel: [drm] number of cap sets: 0
Dec 13 09:10:54.767348 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Dec 13 09:10:54.777752 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 09:10:54.777879 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 09:10:54.797858 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 09:10:54.804848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:54.815662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:10:54.816249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:54.820569 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:54.886727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:54.904591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:10:54.909761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:54.918695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:54.977563 kernel: EDAC MC: Ver: 3.0.0
Dec 13 09:10:55.012616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:55.035126 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 09:10:55.043672 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 09:10:55.081391 lvm[1432]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 09:10:55.117075 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 09:10:55.117674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:10:55.117860 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 09:10:55.118285 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 09:10:55.118480 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 09:10:55.118838 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 09:10:55.119054 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 09:10:55.119151 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 09:10:55.119230 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 09:10:55.119261 systemd[1]: Reached target paths.target - Path Units.
Dec 13 09:10:55.119351 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 09:10:55.122607 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 09:10:55.126606 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 09:10:55.136040 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 09:10:55.140181 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 09:10:55.143548 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 09:10:55.145218 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 09:10:55.145829 systemd[1]: Reached target basic.target - Basic System.
Dec 13 09:10:55.148788 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 09:10:55.148814 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 09:10:55.151514 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 09:10:55.157420 lvm[1436]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 09:10:55.168536 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 09:10:55.179648 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 09:10:55.187508 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 09:10:55.196840 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 09:10:55.197555 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 09:10:55.206631 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 09:10:55.218623 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 09:10:55.227619 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 09:10:55.237611 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 09:10:55.248361 dbus-daemon[1439]: [system] SELinux support is enabled
Dec 13 09:10:55.252605 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 09:10:55.254070 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 09:10:55.258726 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 09:10:55.264651 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 09:10:55.269140 jq[1440]: false
Dec 13 09:10:55.275515 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 09:10:55.280898 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 09:10:55.287495 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 09:10:55.300819 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 09:10:55.302430 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 09:10:55.319880 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 09:10:55.319951 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 09:10:55.322994 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 09:10:55.323095 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Dec 13 09:10:55.323120 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found loop4
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found loop5
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found loop6
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found loop7
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found vda
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found vda1
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found vda2
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found vda3
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found usr
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found vda4
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found vda6
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found vda7
Dec 13 09:10:55.342446 extend-filesystems[1441]: Found vda9
Dec 13 09:10:55.488761 jq[1453]: true
Dec 13 09:10:55.349726 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 09:10:55.501405 coreos-metadata[1438]: Dec 13 09:10:55.423 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 09:10:55.501405 coreos-metadata[1438]: Dec 13 09:10:55.423 INFO Fetch successful
Dec 13 09:10:55.503996 extend-filesystems[1441]: Checking size of /dev/vda9
Dec 13 09:10:55.503996 extend-filesystems[1441]: Resized partition /dev/vda9
Dec 13 09:10:55.622709 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Dec 13 09:10:55.622764 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1377)
Dec 13 09:10:55.622846 update_engine[1452]: I20241213 09:10:55.347836  1452 main.cc:92] Flatcar Update Engine starting
Dec 13 09:10:55.622846 update_engine[1452]: I20241213 09:10:55.383635  1452 update_check_scheduler.cc:74] Next update check in 4m36s
Dec 13 09:10:55.351480 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 09:10:55.623507 tar[1459]: linux-amd64/helm
Dec 13 09:10:55.623910 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024)
Dec 13 09:10:55.372899 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 09:10:55.375018 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 09:10:55.654028 jq[1468]: true
Dec 13 09:10:55.388026 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 09:10:55.405621 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 09:10:55.514972 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 09:10:55.601545 systemd-logind[1450]: New seat seat0.
Dec 13 09:10:55.652969 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 09:10:55.652998 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 09:10:55.654168 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 09:10:55.716040 systemd-networkd[1371]: eth1: Gained IPv6LL
Dec 13 09:10:55.717710 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Dec 13 09:10:55.748629 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 09:10:55.755507 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 09:10:55.758026 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 09:10:55.769215 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 09:10:55.798770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:10:55.804574 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 09:10:55.853342 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 09:10:55.888842 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 09:10:55.888842 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 09:10:55.888842 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 09:10:55.915586 extend-filesystems[1441]: Resized filesystem in /dev/vda9
Dec 13 09:10:55.915586 extend-filesystems[1441]: Found vdb
Dec 13 09:10:55.920530 bash[1503]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 09:10:55.898384 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 09:10:55.899481 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 09:10:55.922612 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 09:10:55.948786 systemd[1]: Starting sshkeys.service...
Dec 13 09:10:56.046440 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 09:10:56.056530 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 09:10:56.070588 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 09:10:56.098827 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 09:10:56.099772 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 09:10:56.221556 coreos-metadata[1527]: Dec 13 09:10:56.221 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 09:10:56.243330 coreos-metadata[1527]: Dec 13 09:10:56.242 INFO Fetch successful
Dec 13 09:10:56.251395 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 09:10:56.274014 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 09:10:56.297712 unknown[1527]: wrote ssh authorized keys file for user: core
Dec 13 09:10:56.328333 containerd[1473]: time="2024-12-13T09:10:56.326488399Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 09:10:56.363149 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 09:10:56.364535 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 09:10:56.381024 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 09:10:56.389971 update-ssh-keys[1540]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 09:10:56.391536 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 09:10:56.397194 systemd[1]: Finished sshkeys.service.
Dec 13 09:10:56.447222 containerd[1473]: time="2024-12-13T09:10:56.446319184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.452550997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.452617057Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.452643656Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.452882570Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.452909453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.453052026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.453076560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.453381515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.453406242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.453427212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454345 containerd[1473]: time="2024-12-13T09:10:56.453442989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454802 containerd[1473]: time="2024-12-13T09:10:56.453674326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454802 containerd[1473]: time="2024-12-13T09:10:56.454014613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454802 containerd[1473]: time="2024-12-13T09:10:56.454232633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:10:56.454802 containerd[1473]: time="2024-12-13T09:10:56.454259942Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 09:10:56.460034 containerd[1473]: time="2024-12-13T09:10:56.459948822Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 09:10:56.460203 containerd[1473]: time="2024-12-13T09:10:56.460101764Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 09:10:56.471535 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.474783070Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.474905980Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.474933116Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.474957241Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.474980648Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.475271893Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.475631805Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.475788538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.475805912Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.475824384Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.475845828Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.475866389Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.475890345Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 09:10:56.476258 containerd[1473]: time="2024-12-13T09:10:56.475916909Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.475939584Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.475960580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.475974394Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.475986851Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476009497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476024630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476040545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476060524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476080029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476099780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476120391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476139931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476153488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.476821 containerd[1473]: time="2024-12-13T09:10:56.476175886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.477174 containerd[1473]: time="2024-12-13T09:10:56.476194890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.477174 containerd[1473]: time="2024-12-13T09:10:56.476242280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.477174 containerd[1473]: time="2024-12-13T09:10:56.476271259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.482094 systemd-networkd[1371]: eth0: Gained IPv6LL
Dec 13 09:10:56.482619 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Dec 13 09:10:56.486885 containerd[1473]: time="2024-12-13T09:10:56.486645901Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 09:10:56.486885 containerd[1473]: time="2024-12-13T09:10:56.486742695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.486885 containerd[1473]: time="2024-12-13T09:10:56.486770885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.486885 containerd[1473]: time="2024-12-13T09:10:56.486784077Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 09:10:56.486885 containerd[1473]: time="2024-12-13T09:10:56.486855754Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 09:10:56.486885 containerd[1473]: time="2024-12-13T09:10:56.486876254Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 09:10:56.486885 containerd[1473]: time="2024-12-13T09:10:56.486893885Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 09:10:56.486885 containerd[1473]: time="2024-12-13T09:10:56.486906788Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 09:10:56.487281 containerd[1473]: time="2024-12-13T09:10:56.486919677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.487281 containerd[1473]: time="2024-12-13T09:10:56.486958253Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 09:10:56.487281 containerd[1473]: time="2024-12-13T09:10:56.486983340Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 09:10:56.487281 containerd[1473]: time="2024-12-13T09:10:56.486999177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 09:10:56.490856 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 09:10:56.495944 containerd[1473]: time="2024-12-13T09:10:56.495522808Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 09:10:56.495944 containerd[1473]: time="2024-12-13T09:10:56.495638165Z" level=info msg="Connect containerd service"
Dec 13 09:10:56.495944 containerd[1473]: time="2024-12-13T09:10:56.495725129Z" level=info msg="using legacy CRI server"
Dec 13 09:10:56.495944 containerd[1473]: time="2024-12-13T09:10:56.495738652Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 09:10:56.496419 containerd[1473]: time="2024-12-13T09:10:56.495943443Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 09:10:56.501934 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 09:10:56.504264 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 09:10:56.514220 containerd[1473]: time="2024-12-13T09:10:56.514009466Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 09:10:56.515569 containerd[1473]: time="2024-12-13T09:10:56.514578308Z" level=info msg="Start subscribing containerd event"
Dec 13 09:10:56.515569 containerd[1473]: time="2024-12-13T09:10:56.514669579Z" level=info msg="Start recovering state"
Dec 13 09:10:56.515569 containerd[1473]: time="2024-12-13T09:10:56.514830152Z" level=info msg="Start event monitor"
Dec 13 09:10:56.515569 containerd[1473]: time="2024-12-13T09:10:56.514866647Z" level=info msg="Start snapshots syncer"
Dec 13 09:10:56.515569 containerd[1473]: time="2024-12-13T09:10:56.514883660Z" level=info msg="Start cni network conf syncer for default"
Dec 13 09:10:56.515569 containerd[1473]: time="2024-12-13T09:10:56.514896380Z" level=info msg="Start streaming server"
Dec 13 09:10:56.515928 containerd[1473]: time="2024-12-13T09:10:56.515711987Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 09:10:56.515928 containerd[1473]: time="2024-12-13T09:10:56.515809061Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 09:10:56.515928 containerd[1473]: time="2024-12-13T09:10:56.515883937Z" level=info msg="containerd successfully booted in 0.208710s"
Dec 13 09:10:56.516101 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 09:10:57.015056 tar[1459]: linux-amd64/LICENSE
Dec 13 09:10:57.015056 tar[1459]: linux-amd64/README.md
Dec 13 09:10:57.037407 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 09:10:57.453941 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 09:10:57.469489 systemd[1]: Started sshd@0-165.232.145.99:22-147.75.109.163:57910.service - OpenSSH per-connection server daemon (147.75.109.163:57910).
Dec 13 09:10:57.581427 sshd[1559]: Accepted publickey for core from 147.75.109.163 port 57910 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:10:57.585272 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:10:57.604598 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 09:10:57.616821 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 09:10:57.628282 systemd-logind[1450]: New session 1 of user core.
Dec 13 09:10:57.667097 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 09:10:57.679594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:10:57.690257 (kubelet)[1565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:10:57.695505 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 09:10:57.707843 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 09:10:57.728396 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 09:10:57.893720 systemd[1569]: Queued start job for default target default.target.
Dec 13 09:10:57.900624 systemd[1569]: Created slice app.slice - User Application Slice.
Dec 13 09:10:57.900675 systemd[1569]: Reached target paths.target - Paths.
Dec 13 09:10:57.900699 systemd[1569]: Reached target timers.target - Timers.
Dec 13 09:10:57.910667 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 09:10:57.944670 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 09:10:57.944894 systemd[1569]: Reached target sockets.target - Sockets.
Dec 13 09:10:57.944928 systemd[1569]: Reached target basic.target - Basic System.
Dec 13 09:10:57.945000 systemd[1569]: Reached target default.target - Main User Target.
Dec 13 09:10:57.945052 systemd[1569]: Startup finished in 205ms.
Dec 13 09:10:57.945280 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 09:10:57.960672 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 09:10:57.966412 systemd[1]: Startup finished in 1.603s (kernel) + 6.972s (initrd) + 8.215s (userspace) = 16.792s.
Dec 13 09:10:58.066859 systemd[1]: Started sshd@1-165.232.145.99:22-147.75.109.163:57924.service - OpenSSH per-connection server daemon (147.75.109.163:57924).
Dec 13 09:10:58.151718 sshd[1588]: Accepted publickey for core from 147.75.109.163 port 57924 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:10:58.155092 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:10:58.172883 systemd-logind[1450]: New session 2 of user core.
Dec 13 09:10:58.178674 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 09:10:58.251641 sshd[1588]: pam_unix(sshd:session): session closed for user core
Dec 13 09:10:58.267380 systemd[1]: sshd@1-165.232.145.99:22-147.75.109.163:57924.service: Deactivated successfully.
Dec 13 09:10:58.272140 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 09:10:58.274025 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit.
Dec 13 09:10:58.285501 systemd[1]: Started sshd@2-165.232.145.99:22-147.75.109.163:57930.service - OpenSSH per-connection server daemon (147.75.109.163:57930).
Dec 13 09:10:58.289569 systemd-logind[1450]: Removed session 2.
Dec 13 09:10:58.361160 sshd[1595]: Accepted publickey for core from 147.75.109.163 port 57930 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:10:58.365515 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:10:58.374153 systemd-logind[1450]: New session 3 of user core.
Dec 13 09:10:58.383759 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 09:10:58.456607 sshd[1595]: pam_unix(sshd:session): session closed for user core
Dec 13 09:10:58.467612 systemd[1]: sshd@2-165.232.145.99:22-147.75.109.163:57930.service: Deactivated successfully.
Dec 13 09:10:58.471704 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 09:10:58.476423 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit.
Dec 13 09:10:58.486715 systemd[1]: Started sshd@3-165.232.145.99:22-147.75.109.163:57936.service - OpenSSH per-connection server daemon (147.75.109.163:57936).
Dec 13 09:10:58.488824 systemd-logind[1450]: Removed session 3.
Dec 13 09:10:58.542632 sshd[1603]: Accepted publickey for core from 147.75.109.163 port 57936 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:10:58.546436 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:10:58.554685 systemd-logind[1450]: New session 4 of user core.
Dec 13 09:10:58.567650 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 09:10:58.638654 sshd[1603]: pam_unix(sshd:session): session closed for user core
Dec 13 09:10:58.643761 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit.
Dec 13 09:10:58.645113 systemd[1]: sshd@3-165.232.145.99:22-147.75.109.163:57936.service: Deactivated successfully.
Dec 13 09:10:58.650080 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 09:10:58.668376 systemd[1]: Started sshd@4-165.232.145.99:22-147.75.109.163:57952.service - OpenSSH per-connection server daemon (147.75.109.163:57952).
Dec 13 09:10:58.670927 systemd-logind[1450]: Removed session 4.
Dec 13 09:10:58.683368 kubelet[1565]: E1213 09:10:58.683219    1565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:10:58.687990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:10:58.688346 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:10:58.689140 systemd[1]: kubelet.service: Consumed 1.549s CPU time.
Dec 13 09:10:58.748475 sshd[1610]: Accepted publickey for core from 147.75.109.163 port 57952 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:10:58.750904 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:10:58.759273 systemd-logind[1450]: New session 5 of user core.
Dec 13 09:10:58.768841 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 09:10:58.853405 sudo[1614]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 09:10:58.853851 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 09:10:58.877718 sudo[1614]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:58.884811 sshd[1610]: pam_unix(sshd:session): session closed for user core
Dec 13 09:10:58.894968 systemd[1]: sshd@4-165.232.145.99:22-147.75.109.163:57952.service: Deactivated successfully.
Dec 13 09:10:58.897799 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 09:10:58.901653 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit.
Dec 13 09:10:58.907956 systemd[1]: Started sshd@5-165.232.145.99:22-147.75.109.163:57954.service - OpenSSH per-connection server daemon (147.75.109.163:57954).
Dec 13 09:10:58.910273 systemd-logind[1450]: Removed session 5.
Dec 13 09:10:58.966634 sshd[1619]: Accepted publickey for core from 147.75.109.163 port 57954 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:10:58.970711 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:10:58.985457 systemd-logind[1450]: New session 6 of user core.
Dec 13 09:10:58.991780 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 09:10:59.061348 sudo[1623]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 09:10:59.062489 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 09:10:59.069453 sudo[1623]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:59.078898 sudo[1622]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 09:10:59.080103 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 09:10:59.144562 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 09:10:59.147145 auditctl[1626]: No rules
Dec 13 09:10:59.148221 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 09:10:59.148596 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 09:10:59.160071 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 09:10:59.254624 augenrules[1644]: No rules
Dec 13 09:10:59.257578 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 09:10:59.265328 sudo[1622]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:59.271702 sshd[1619]: pam_unix(sshd:session): session closed for user core
Dec 13 09:10:59.296936 systemd[1]: sshd@5-165.232.145.99:22-147.75.109.163:57954.service: Deactivated successfully.
Dec 13 09:10:59.299999 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 09:10:59.302548 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit.
Dec 13 09:10:59.314838 systemd[1]: Started sshd@6-165.232.145.99:22-147.75.109.163:57966.service - OpenSSH per-connection server daemon (147.75.109.163:57966).
Dec 13 09:10:59.317234 systemd-logind[1450]: Removed session 6.
Dec 13 09:10:59.386563 sshd[1652]: Accepted publickey for core from 147.75.109.163 port 57966 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:10:59.389415 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:10:59.398184 systemd-logind[1450]: New session 7 of user core.
Dec 13 09:10:59.412800 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 09:10:59.485778 sudo[1655]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 09:10:59.487025 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 09:11:00.205183 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 09:11:00.208740 (dockerd)[1672]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 09:11:00.707822 systemd[1]: Started sshd@7-165.232.145.99:22-218.92.0.166:25273.service - OpenSSH per-connection server daemon (218.92.0.166:25273).
Dec 13 09:11:01.326641 dockerd[1672]: time="2024-12-13T09:11:01.326551365Z" level=info msg="Starting up"
Dec 13 09:11:01.618735 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport744154703-merged.mount: Deactivated successfully.
Dec 13 09:11:01.735663 dockerd[1672]: time="2024-12-13T09:11:01.735565159Z" level=info msg="Loading containers: start."
Dec 13 09:11:02.027140 sshd[1699]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166  user=root
Dec 13 09:11:02.147754 kernel: Initializing XFRM netlink socket
Dec 13 09:11:02.245614 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Dec 13 09:11:02.249013 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Dec 13 09:11:02.276493 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Dec 13 09:11:02.474851 systemd-networkd[1371]: docker0: Link UP
Dec 13 09:11:02.475992 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Dec 13 09:11:02.511768 dockerd[1672]: time="2024-12-13T09:11:02.511187157Z" level=info msg="Loading containers: done."
Dec 13 09:11:02.547718 dockerd[1672]: time="2024-12-13T09:11:02.547214161Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 09:11:02.547718 dockerd[1672]: time="2024-12-13T09:11:02.547466206Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 09:11:02.548632 dockerd[1672]: time="2024-12-13T09:11:02.547673401Z" level=info msg="Daemon has completed initialization"
Dec 13 09:11:02.617850 dockerd[1672]: time="2024-12-13T09:11:02.617245536Z" level=info msg="API listen on /run/docker.sock"
Dec 13 09:11:02.621999 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 09:11:04.032345 containerd[1473]: time="2024-12-13T09:11:04.031991341Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 09:11:04.117473 sshd[1678]: PAM: Permission denied for root from 218.92.0.166
Dec 13 09:11:04.449459 sshd[1823]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166  user=root
Dec 13 09:11:04.872970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800285927.mount: Deactivated successfully.
Dec 13 09:11:06.283240 sshd[1678]: PAM: Permission denied for root from 218.92.0.166
Dec 13 09:11:06.609466 sshd[1878]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166  user=root
Dec 13 09:11:06.669604 containerd[1473]: time="2024-12-13T09:11:06.669479813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:06.672780 containerd[1473]: time="2024-12-13T09:11:06.672328975Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483"
Dec 13 09:11:06.674760 containerd[1473]: time="2024-12-13T09:11:06.674705654Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:06.680530 containerd[1473]: time="2024-12-13T09:11:06.680426345Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.64836684s"
Dec 13 09:11:06.680530 containerd[1473]: time="2024-12-13T09:11:06.680498869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 09:11:06.681008 containerd[1473]: time="2024-12-13T09:11:06.680686075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:06.683797 containerd[1473]: time="2024-12-13T09:11:06.683710453Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 09:11:08.050640 sshd[1678]: PAM: Permission denied for root from 218.92.0.166
Dec 13 09:11:08.211946 sshd[1678]: Received disconnect from 218.92.0.166 port 25273:11:  [preauth]
Dec 13 09:11:08.211946 sshd[1678]: Disconnected from authenticating user root 218.92.0.166 port 25273 [preauth]
Dec 13 09:11:08.216221 systemd[1]: sshd@7-165.232.145.99:22-218.92.0.166:25273.service: Deactivated successfully.
Dec 13 09:11:08.730352 containerd[1473]: time="2024-12-13T09:11:08.728317827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:08.730352 containerd[1473]: time="2024-12-13T09:11:08.730330338Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157"
Dec 13 09:11:08.732048 containerd[1473]: time="2024-12-13T09:11:08.731281456Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:08.736948 containerd[1473]: time="2024-12-13T09:11:08.736870469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:08.739867 containerd[1473]: time="2024-12-13T09:11:08.739790430Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 2.056011711s"
Dec 13 09:11:08.740408 containerd[1473]: time="2024-12-13T09:11:08.740251915Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 09:11:08.741391 containerd[1473]: time="2024-12-13T09:11:08.741085793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 09:11:08.869615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 09:11:08.877655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:11:09.116662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:11:09.124326 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:11:09.217391 kubelet[1893]: E1213 09:11:09.217269    1893 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:11:09.225938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:11:09.226391 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:11:10.686994 containerd[1473]: time="2024-12-13T09:11:10.686893600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:10.690177 containerd[1473]: time="2024-12-13T09:11:10.689935050Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067"
Dec 13 09:11:10.691650 containerd[1473]: time="2024-12-13T09:11:10.691553520Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:10.698470 containerd[1473]: time="2024-12-13T09:11:10.698385870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:10.699477 containerd[1473]: time="2024-12-13T09:11:10.699229825Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.95809613s"
Dec 13 09:11:10.699477 containerd[1473]: time="2024-12-13T09:11:10.699326493Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 09:11:10.700687 containerd[1473]: time="2024-12-13T09:11:10.700396944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 09:11:10.704036 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Dec 13 09:11:12.288085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2422552465.mount: Deactivated successfully.
Dec 13 09:11:13.536927 containerd[1473]: time="2024-12-13T09:11:13.536839545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:13.538496 containerd[1473]: time="2024-12-13T09:11:13.538409631Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243"
Dec 13 09:11:13.539770 containerd[1473]: time="2024-12-13T09:11:13.539445216Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:13.543651 containerd[1473]: time="2024-12-13T09:11:13.543576605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:13.544578 containerd[1473]: time="2024-12-13T09:11:13.544531341Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.844083391s"
Dec 13 09:11:13.544578 containerd[1473]: time="2024-12-13T09:11:13.544575601Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 09:11:13.545530 containerd[1473]: time="2024-12-13T09:11:13.545476066Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 09:11:13.760395 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Dec 13 09:11:14.233871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504083699.mount: Deactivated successfully.
Dec 13 09:11:15.920349 containerd[1473]: time="2024-12-13T09:11:15.918790497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:15.920871 containerd[1473]: time="2024-12-13T09:11:15.920762206Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 09:11:15.921165 containerd[1473]: time="2024-12-13T09:11:15.921132993Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:15.926376 containerd[1473]: time="2024-12-13T09:11:15.926301246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:15.926851 containerd[1473]: time="2024-12-13T09:11:15.926805029Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.381282714s"
Dec 13 09:11:15.926959 containerd[1473]: time="2024-12-13T09:11:15.926853900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 09:11:15.927686 containerd[1473]: time="2024-12-13T09:11:15.927499577Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 09:11:16.523352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1406985392.mount: Deactivated successfully.
Dec 13 09:11:16.533601 containerd[1473]: time="2024-12-13T09:11:16.533442515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:16.536605 containerd[1473]: time="2024-12-13T09:11:16.536489854Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 13 09:11:16.538574 containerd[1473]: time="2024-12-13T09:11:16.538459584Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:16.542486 containerd[1473]: time="2024-12-13T09:11:16.542405322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:16.548888 containerd[1473]: time="2024-12-13T09:11:16.544116153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 616.573843ms"
Dec 13 09:11:16.548888 containerd[1473]: time="2024-12-13T09:11:16.544183480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 09:11:16.549821 containerd[1473]: time="2024-12-13T09:11:16.549749365Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 09:11:17.004556 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Dec 13 09:11:17.228048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356202089.mount: Deactivated successfully.
Dec 13 09:11:19.370006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 09:11:19.379435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:11:19.851772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:11:19.864487 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:11:20.028936 kubelet[2020]: E1213 09:11:20.028769    2020 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:11:20.033005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:11:20.034172 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:11:20.470449 containerd[1473]: time="2024-12-13T09:11:20.470280157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:20.476352 containerd[1473]: time="2024-12-13T09:11:20.474175073Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Dec 13 09:11:20.476352 containerd[1473]: time="2024-12-13T09:11:20.474830303Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:20.486317 containerd[1473]: time="2024-12-13T09:11:20.486190706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:20.488844 containerd[1473]: time="2024-12-13T09:11:20.488670609Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.938858122s"
Dec 13 09:11:20.489090 containerd[1473]: time="2024-12-13T09:11:20.489061913Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 09:11:23.323874 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:11:23.330734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:11:23.376249 systemd[1]: Reloading requested from client PID 2053 ('systemctl') (unit session-7.scope)...
Dec 13 09:11:23.376638 systemd[1]: Reloading...
Dec 13 09:11:23.531334 zram_generator::config[2092]: No configuration found.
Dec 13 09:11:23.686979 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:11:23.773909 systemd[1]: Reloading finished in 396 ms.
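[annotation] The docker.socket warning during the reload is self-describing: systemd rewrites the legacy /var/run path on the fly but wants the unit updated. Per the warning, the offending directive is line 6 of the shipped unit; the fix is the one-line change systemd already applies:

```ini
# /usr/lib/systemd/system/docker.socket (excerpt)
[Socket]
# Before (legacy path below /var/run/, triggers the warning):
#ListenStream=/var/run/docker.sock
# After (what systemd rewrites it to):
ListenStream=/run/docker.sock
```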
Dec 13 09:11:23.839392 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 09:11:23.843367 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:11:23.844285 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 09:11:23.844536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:11:23.852819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:11:23.977788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:11:23.991815 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 09:11:24.061320 kubelet[2151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:11:24.062000 kubelet[2151]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 09:11:24.062130 kubelet[2151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:11:24.063682 kubelet[2151]: I1213 09:11:24.063595    2151 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
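[annotation] All three deprecation warnings point at the same remedy: move the flags into the KubeletConfiguration file. A hedged sketch of the config-file equivalents (field names are the current upstream ones; the socket path is the usual containerd default, not taken from this log):

```yaml
# KubeletConfiguration equivalents for the deprecated flags above (illustrative values)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock       # replaces --container-runtime-endpoint
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # replaces --volume-plugin-dir; path matches the Flexvolume dir logged below
# --pod-infra-container-image has no config-file field; per the warning, the sandbox
# image is instead taken from the CRI runtime's own config (containerd's sandbox image).
```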
Dec 13 09:11:24.315434 systemd[1]: Started sshd@8-165.232.145.99:22-85.208.253.246:35416.service - OpenSSH per-connection server daemon (85.208.253.246:35416).
Dec 13 09:11:24.803720 kubelet[2151]: I1213 09:11:24.803643    2151 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 09:11:24.803720 kubelet[2151]: I1213 09:11:24.803703    2151 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 09:11:24.804119 kubelet[2151]: I1213 09:11:24.804093    2151 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 09:11:24.827005 kubelet[2151]: I1213 09:11:24.826949    2151 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 09:11:24.827666 kubelet[2151]: E1213 09:11:24.827591    2151 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://165.232.145.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 165.232.145.99:6443: connect: connection refused" logger="UnhandledError"
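[annotation] "connection refused" on 165.232.145.99:6443 is consistent with the rest of this boot: the kubelet itself is about to start kube-apiserver as a static pod, so nothing is listening yet and every client-go call fails the same way. Two standard commands for confirming that state by hand (generic tooling, not taken from this log):

```sh
# Is anything listening on the apiserver port yet? (expected here: no output, hence "connection refused")
ss -ltn 'sport = :6443'
# Probe the endpoint directly; -k because the serving certificate may not be trusted yet.
curl -sk https://165.232.145.99:6443/healthz || echo "apiserver not up yet"
```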
Dec 13 09:11:24.845541 kubelet[2151]: E1213 09:11:24.845430    2151 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 09:11:24.845541 kubelet[2151]: I1213 09:11:24.845491    2151 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 09:11:24.852630 kubelet[2151]: I1213 09:11:24.852551    2151 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 09:11:24.852833 kubelet[2151]: I1213 09:11:24.852773    2151 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 09:11:24.853013 kubelet[2151]: I1213 09:11:24.852953    2151 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 09:11:24.853311 kubelet[2151]: I1213 09:11:24.853005    2151 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-7-516c4b3017","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
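[annotation] The HardEvictionThresholds embedded in the node config above are the kubelet defaults; rewritten from that JSON into KubeletConfiguration form they read:

```yaml
# evictionHard equivalent of the HardEvictionThresholds in the preceding log line
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
```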
Dec 13 09:11:24.853506 kubelet[2151]: I1213 09:11:24.853329    2151 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 09:11:24.853506 kubelet[2151]: I1213 09:11:24.853346    2151 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 09:11:24.853586 kubelet[2151]: I1213 09:11:24.853517    2151 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:11:24.856344 kubelet[2151]: I1213 09:11:24.855892    2151 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 09:11:24.856344 kubelet[2151]: I1213 09:11:24.855950    2151 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 09:11:24.856344 kubelet[2151]: I1213 09:11:24.856003    2151 kubelet.go:314] "Adding apiserver pod source"
Dec 13 09:11:24.856344 kubelet[2151]: I1213 09:11:24.856027    2151 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
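[annotation] "Adding static pod path" means the kubelet will run any manifest dropped into /etc/kubernetes/manifests without needing an apiserver, which is exactly how the control-plane pods later in this log get started. A minimal sketch of such a manifest (name is hypothetical; the image is one pulled earlier in this log):

```yaml
# /etc/kubernetes/manifests/example.yaml -- minimal static pod sketch
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: kube-system
spec:
  containers:
    - name: example
      image: registry.k8s.io/pause:3.10
```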
Dec 13 09:11:24.868593 kubelet[2151]: I1213 09:11:24.868525    2151 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 09:11:24.871019 kubelet[2151]: I1213 09:11:24.870741    2151 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 09:11:24.871622 kubelet[2151]: W1213 09:11:24.871553    2151 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 09:11:24.872985 kubelet[2151]: I1213 09:11:24.872448    2151 server.go:1269] "Started kubelet"
Dec 13 09:11:24.872985 kubelet[2151]: W1213 09:11:24.872667    2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.145.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-7-516c4b3017&limit=500&resourceVersion=0": dial tcp 165.232.145.99:6443: connect: connection refused
Dec 13 09:11:24.872985 kubelet[2151]: E1213 09:11:24.872740    2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://165.232.145.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-7-516c4b3017&limit=500&resourceVersion=0\": dial tcp 165.232.145.99:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:11:24.872985 kubelet[2151]: W1213 09:11:24.872723    2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.145.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 165.232.145.99:6443: connect: connection refused
Dec 13 09:11:24.872985 kubelet[2151]: E1213 09:11:24.872794    2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://165.232.145.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.145.99:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:11:24.873274 kubelet[2151]: I1213 09:11:24.873236    2151 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 09:11:24.874079 kubelet[2151]: I1213 09:11:24.874005    2151 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 09:11:24.875330 kubelet[2151]: I1213 09:11:24.874745    2151 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 09:11:24.876381 kubelet[2151]: I1213 09:11:24.875736    2151 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 09:11:24.879242 kubelet[2151]: I1213 09:11:24.879204    2151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 09:11:24.881690 kubelet[2151]: E1213 09:11:24.878591    2151 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.145.99:6443/api/v1/namespaces/default/events\": dial tcp 165.232.145.99:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-7-516c4b3017.1810b18f238b4835  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-7-516c4b3017,UID:ci-4081.2.1-7-516c4b3017,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-7-516c4b3017,},FirstTimestamp:2024-12-13 09:11:24.872411189 +0000 UTC m=+0.875768833,LastTimestamp:2024-12-13 09:11:24.872411189 +0000 UTC m=+0.875768833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-7-516c4b3017,}"
Dec 13 09:11:24.882669 kubelet[2151]: I1213 09:11:24.882629    2151 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 09:11:24.886551 kubelet[2151]: I1213 09:11:24.885723    2151 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 09:11:24.886551 kubelet[2151]: I1213 09:11:24.885890    2151 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 09:11:24.886551 kubelet[2151]: I1213 09:11:24.885989    2151 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 09:11:24.886800 kubelet[2151]: W1213 09:11:24.886594    2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.145.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.145.99:6443: connect: connection refused
Dec 13 09:11:24.886800 kubelet[2151]: E1213 09:11:24.886672    2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.145.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.145.99:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:11:24.887522 kubelet[2151]: E1213 09:11:24.887148    2151 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 09:11:24.890024 kubelet[2151]: E1213 09:11:24.889950    2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:24.890319 kubelet[2151]: E1213 09:11:24.890241    2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.145.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-7-516c4b3017?timeout=10s\": dial tcp 165.232.145.99:6443: connect: connection refused" interval="200ms"
Dec 13 09:11:24.890600 kubelet[2151]: I1213 09:11:24.890570    2151 factory.go:221] Registration of the systemd container factory successfully
Dec 13 09:11:24.890695 kubelet[2151]: I1213 09:11:24.890669    2151 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 09:11:24.895334 kubelet[2151]: I1213 09:11:24.894769    2151 factory.go:221] Registration of the containerd container factory successfully
Dec 13 09:11:24.920077 kubelet[2151]: I1213 09:11:24.919970    2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 09:11:24.923221 kubelet[2151]: I1213 09:11:24.923180    2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 09:11:24.923221 kubelet[2151]: I1213 09:11:24.923232    2151 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 09:11:24.923421 kubelet[2151]: I1213 09:11:24.923254    2151 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 09:11:24.923421 kubelet[2151]: E1213 09:11:24.923377    2151 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 09:11:24.940195 kubelet[2151]: W1213 09:11:24.940119    2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.145.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.145.99:6443: connect: connection refused
Dec 13 09:11:24.941339 kubelet[2151]: E1213 09:11:24.940429    2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://165.232.145.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.145.99:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:11:24.943247 kubelet[2151]: I1213 09:11:24.943211    2151 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 09:11:24.944458 kubelet[2151]: I1213 09:11:24.944416    2151 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 09:11:24.944458 kubelet[2151]: I1213 09:11:24.944462    2151 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:11:24.949529 kubelet[2151]: I1213 09:11:24.949469    2151 policy_none.go:49] "None policy: Start"
Dec 13 09:11:24.950829 kubelet[2151]: I1213 09:11:24.950802    2151 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 09:11:24.950929 kubelet[2151]: I1213 09:11:24.950846    2151 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 09:11:24.963379 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 09:11:24.976835 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 09:11:24.982515 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
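[annotation] These three slices are the per-QoS-class cgroup hierarchy (Guaranteed pods sit directly under kubepods.slice, Burstable and BestEffort under their child slices), created via the systemd cgroup driver on cgroup v2 as declared in the node config above. One way to inspect the resulting tree:

```sh
# Walk the pod cgroup hierarchy these slices form (output elided)
systemd-cgls /sys/fs/cgroup/kubepods.slice
```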
Dec 13 09:11:24.990152 kubelet[2151]: E1213 09:11:24.990072    2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:24.990709 kubelet[2151]: I1213 09:11:24.990681    2151 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 09:11:24.990965 kubelet[2151]: I1213 09:11:24.990944    2151 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 09:11:24.991036 kubelet[2151]: I1213 09:11:24.990964    2151 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 09:11:24.991782 kubelet[2151]: I1213 09:11:24.991729    2151 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 09:11:24.994454 kubelet[2151]: E1213 09:11:24.994417    2151 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:25.038976 systemd[1]: Created slice kubepods-burstable-pod452b2e1358c7e9f80977d484e05d540b.slice - libcontainer container kubepods-burstable-pod452b2e1358c7e9f80977d484e05d540b.slice.
Dec 13 09:11:25.066626 systemd[1]: Created slice kubepods-burstable-pod11e4248dbe64b1f90198a1904f8f3b42.slice - libcontainer container kubepods-burstable-pod11e4248dbe64b1f90198a1904f8f3b42.slice.
Dec 13 09:11:25.087223 kubelet[2151]: I1213 09:11:25.086940    2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/452b2e1358c7e9f80977d484e05d540b-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-7-516c4b3017\" (UID: \"452b2e1358c7e9f80977d484e05d540b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.087223 kubelet[2151]: I1213 09:11:25.086989    2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/452b2e1358c7e9f80977d484e05d540b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-7-516c4b3017\" (UID: \"452b2e1358c7e9f80977d484e05d540b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.087223 kubelet[2151]: I1213 09:11:25.087021    2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.087223 kubelet[2151]: I1213 09:11:25.087050    2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.087223 kubelet[2151]: I1213 09:11:25.087075    2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.087800 kubelet[2151]: I1213 09:11:25.087093    2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6f95b01e32296b917fd901e703bc0cf-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-7-516c4b3017\" (UID: \"e6f95b01e32296b917fd901e703bc0cf\") " pod="kube-system/kube-scheduler-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.087800 kubelet[2151]: I1213 09:11:25.087110    2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/452b2e1358c7e9f80977d484e05d540b-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-7-516c4b3017\" (UID: \"452b2e1358c7e9f80977d484e05d540b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.087800 kubelet[2151]: I1213 09:11:25.087147    2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.087800 kubelet[2151]: I1213 09:11:25.087165    2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.090452 systemd[1]: Created slice kubepods-burstable-pode6f95b01e32296b917fd901e703bc0cf.slice - libcontainer container kubepods-burstable-pode6f95b01e32296b917fd901e703bc0cf.slice.
Dec 13 09:11:25.091516 kubelet[2151]: E1213 09:11:25.091463    2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.145.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-7-516c4b3017?timeout=10s\": dial tcp 165.232.145.99:6443: connect: connection refused" interval="400ms"
Dec 13 09:11:25.092939 kubelet[2151]: I1213 09:11:25.092492    2151 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.092939 kubelet[2151]: E1213 09:11:25.092855    2151 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.145.99:6443/api/v1/nodes\": dial tcp 165.232.145.99:6443: connect: connection refused" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.294940 kubelet[2151]: I1213 09:11:25.294888    2151 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.295406 kubelet[2151]: E1213 09:11:25.295368    2151 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.145.99:6443/api/v1/nodes\": dial tcp 165.232.145.99:6443: connect: connection refused" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.361464 kubelet[2151]: E1213 09:11:25.361175    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:25.363220 containerd[1473]: time="2024-12-13T09:11:25.363128930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-7-516c4b3017,Uid:452b2e1358c7e9f80977d484e05d540b,Namespace:kube-system,Attempt:0,}"
Dec 13 09:11:25.386219 kubelet[2151]: E1213 09:11:25.385547    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:25.393028 containerd[1473]: time="2024-12-13T09:11:25.392937681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-7-516c4b3017,Uid:11e4248dbe64b1f90198a1904f8f3b42,Namespace:kube-system,Attempt:0,}"
Dec 13 09:11:25.395335 kubelet[2151]: E1213 09:11:25.395200    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:25.396116 containerd[1473]: time="2024-12-13T09:11:25.395848563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-7-516c4b3017,Uid:e6f95b01e32296b917fd901e703bc0cf,Namespace:kube-system,Attempt:0,}"
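[annotation] The repeated "Nameserver limits exceeded" warnings reflect the resolver limit of three nameserver entries: the host's resolv.conf evidently listed more, and the truncated applied line even repeats 67.207.67.3. A sketch reconstructed from the warning text (ordering assumed):

```text
# Applied nameserver lines per the warning (note 67.207.67.3 appears twice):
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 67.207.67.3
# A deduplicated /etc/resolv.conf that fits the three-entry limit and would silence the warning:
nameserver 67.207.67.3
nameserver 67.207.67.2
```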
Dec 13 09:11:25.492562 kubelet[2151]: E1213 09:11:25.492459    2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.145.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-7-516c4b3017?timeout=10s\": dial tcp 165.232.145.99:6443: connect: connection refused" interval="800ms"
Dec 13 09:11:25.582631 sshd[2158]: Invalid user kj from 85.208.253.246 port 35416
Dec 13 09:11:25.696965 kubelet[2151]: I1213 09:11:25.696925    2151 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.697581 kubelet[2151]: E1213 09:11:25.697419    2151 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.145.99:6443/api/v1/nodes\": dial tcp 165.232.145.99:6443: connect: connection refused" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:25.821708 sshd[2158]: Received disconnect from 85.208.253.246 port 35416:11: Bye Bye [preauth]
Dec 13 09:11:25.821708 sshd[2158]: Disconnected from invalid user kj 85.208.253.246 port 35416 [preauth]
Dec 13 09:11:25.824525 systemd[1]: sshd@8-165.232.145.99:22-85.208.253.246:35416.service: Deactivated successfully.
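[annotation] The probe above (invalid user "kj" from 85.208.253.246, disconnecting before auth) is routine internet scanning noise. A common hardening response, sketched as generic sshd_config directives rather than anything read from this host:

```text
# /etc/ssh/sshd_config excerpt (typical hardening, illustrative)
PasswordAuthentication no
PermitRootLogin no
```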
Dec 13 09:11:25.833605 kubelet[2151]: W1213 09:11:25.833490    2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.145.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.145.99:6443: connect: connection refused
Dec 13 09:11:25.833605 kubelet[2151]: E1213 09:11:25.833557    2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://165.232.145.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.145.99:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:11:25.913742 kubelet[2151]: W1213 09:11:25.913593    2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.145.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 165.232.145.99:6443: connect: connection refused
Dec 13 09:11:25.913742 kubelet[2151]: E1213 09:11:25.913694    2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://165.232.145.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.145.99:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:11:25.941356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701354521.mount: Deactivated successfully.
Dec 13 09:11:25.950985 containerd[1473]: time="2024-12-13T09:11:25.950636078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 09:11:25.954310 containerd[1473]: time="2024-12-13T09:11:25.954197330Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Dec 13 09:11:25.954958 containerd[1473]: time="2024-12-13T09:11:25.954907259Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 09:11:25.956155 containerd[1473]: time="2024-12-13T09:11:25.956052061Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 09:11:25.958866 containerd[1473]: time="2024-12-13T09:11:25.958784573Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 09:11:25.959311 containerd[1473]: time="2024-12-13T09:11:25.959161749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 09:11:25.961489 containerd[1473]: time="2024-12-13T09:11:25.961416971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 09:11:25.967641 containerd[1473]: time="2024-12-13T09:11:25.967557136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 09:11:25.968807 containerd[1473]: time="2024-12-13T09:11:25.968436602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 572.502402ms"
Dec 13 09:11:25.970168 containerd[1473]: time="2024-12-13T09:11:25.969517597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 606.265295ms"
Dec 13 09:11:25.973191 containerd[1473]: time="2024-12-13T09:11:25.973032123Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.954598ms"
Dec 13 09:11:26.187816 containerd[1473]: time="2024-12-13T09:11:26.187537290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:11:26.187816 containerd[1473]: time="2024-12-13T09:11:26.187627885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:11:26.187816 containerd[1473]: time="2024-12-13T09:11:26.187641003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:26.190653 containerd[1473]: time="2024-12-13T09:11:26.187754253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:26.195131 containerd[1473]: time="2024-12-13T09:11:26.194867453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:11:26.195131 containerd[1473]: time="2024-12-13T09:11:26.194948307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:11:26.195131 containerd[1473]: time="2024-12-13T09:11:26.194965584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:26.198412 containerd[1473]: time="2024-12-13T09:11:26.196088165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:26.201956 containerd[1473]: time="2024-12-13T09:11:26.201441808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:11:26.201956 containerd[1473]: time="2024-12-13T09:11:26.201655005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:11:26.201956 containerd[1473]: time="2024-12-13T09:11:26.201785635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:26.203189 containerd[1473]: time="2024-12-13T09:11:26.202018026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:26.240668 systemd[1]: Started cri-containerd-9054b482aba4d671c8225d67ad29e10a3ee3548b77e30dfccd8725689e721ff9.scope - libcontainer container 9054b482aba4d671c8225d67ad29e10a3ee3548b77e30dfccd8725689e721ff9.
Dec 13 09:11:26.248215 systemd[1]: Started cri-containerd-37cccdde285abed60ba569366154efd5b92b9f19469eb7a0f74e8576d541ac43.scope - libcontainer container 37cccdde285abed60ba569366154efd5b92b9f19469eb7a0f74e8576d541ac43.
Dec 13 09:11:26.265638 systemd[1]: Started cri-containerd-970bda6c5af30118df4e56f3d5a1d377a6a3a4f0cf160c25f7c0942a131f1b83.scope - libcontainer container 970bda6c5af30118df4e56f3d5a1d377a6a3a4f0cf160c25f7c0942a131f1b83.
Dec 13 09:11:26.295420 kubelet[2151]: E1213 09:11:26.294212    2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.145.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-7-516c4b3017?timeout=10s\": dial tcp 165.232.145.99:6443: connect: connection refused" interval="1.6s"
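[annotation] Across the lease-controller retries the interval doubles: 200ms, 400ms, 800ms, now 1.6s -- plain exponential backoff while the apiserver stays unreachable. A minimal Go sketch of that doubling pattern (illustrative, not kubelet source; the 7s cap is an assumption):

```go
package main

import (
	"fmt"
	"time"
)

// retryWithBackoff retries attempt, doubling the wait after each failure and
// capping it at max -- mirroring the 200ms -> 400ms -> 800ms -> 1.6s progression above.
func retryWithBackoff(attempt func() error, start, max time.Duration) {
	interval := start
	for err := attempt(); err != nil; err = attempt() {
		fmt.Printf("attempt failed (%v), will retry in %v\n", err, interval)
		time.Sleep(interval)
		interval *= 2
		if interval > max {
			interval = max
		}
	}
}

func main() {
	tries := 0
	retryWithBackoff(func() error {
		if tries++; tries < 5 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 7*time.Second)
	fmt.Println("succeeded after", tries, "attempts")
}
```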
Dec 13 09:11:26.374346 containerd[1473]: time="2024-12-13T09:11:26.373166982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-7-516c4b3017,Uid:e6f95b01e32296b917fd901e703bc0cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"37cccdde285abed60ba569366154efd5b92b9f19469eb7a0f74e8576d541ac43\""
Dec 13 09:11:26.380971 kubelet[2151]: E1213 09:11:26.380750    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:26.384904 containerd[1473]: time="2024-12-13T09:11:26.384680069Z" level=info msg="CreateContainer within sandbox \"37cccdde285abed60ba569366154efd5b92b9f19469eb7a0f74e8576d541ac43\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 09:11:26.397063 containerd[1473]: time="2024-12-13T09:11:26.396649565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-7-516c4b3017,Uid:452b2e1358c7e9f80977d484e05d540b,Namespace:kube-system,Attempt:0,} returns sandbox id \"970bda6c5af30118df4e56f3d5a1d377a6a3a4f0cf160c25f7c0942a131f1b83\""
Dec 13 09:11:26.398372 kubelet[2151]: E1213 09:11:26.398245    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:26.401729 containerd[1473]: time="2024-12-13T09:11:26.401190079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-7-516c4b3017,Uid:11e4248dbe64b1f90198a1904f8f3b42,Namespace:kube-system,Attempt:0,} returns sandbox id \"9054b482aba4d671c8225d67ad29e10a3ee3548b77e30dfccd8725689e721ff9\""
Dec 13 09:11:26.403169 containerd[1473]: time="2024-12-13T09:11:26.403079937Z" level=info msg="CreateContainer within sandbox \"970bda6c5af30118df4e56f3d5a1d377a6a3a4f0cf160c25f7c0942a131f1b83\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 09:11:26.404545 kubelet[2151]: E1213 09:11:26.403410    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:26.407412 containerd[1473]: time="2024-12-13T09:11:26.407174011Z" level=info msg="CreateContainer within sandbox \"9054b482aba4d671c8225d67ad29e10a3ee3548b77e30dfccd8725689e721ff9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 09:11:26.423644 containerd[1473]: time="2024-12-13T09:11:26.423396686Z" level=info msg="CreateContainer within sandbox \"37cccdde285abed60ba569366154efd5b92b9f19469eb7a0f74e8576d541ac43\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"63e10ec524f5d0275ed30571fd8a11b8af62aeb99c7bf03e10bd8268c026be0e\""
Dec 13 09:11:26.425066 containerd[1473]: time="2024-12-13T09:11:26.424787962Z" level=info msg="StartContainer for \"63e10ec524f5d0275ed30571fd8a11b8af62aeb99c7bf03e10bd8268c026be0e\""
Dec 13 09:11:26.433738 containerd[1473]: time="2024-12-13T09:11:26.433455214Z" level=info msg="CreateContainer within sandbox \"970bda6c5af30118df4e56f3d5a1d377a6a3a4f0cf160c25f7c0942a131f1b83\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"91f17c1a2d3b8487e163a126e4141854a4fc5f34ad2589b83ad9cd68f0706069\""
Dec 13 09:11:26.435327 containerd[1473]: time="2024-12-13T09:11:26.435076430Z" level=info msg="StartContainer for \"91f17c1a2d3b8487e163a126e4141854a4fc5f34ad2589b83ad9cd68f0706069\""
Dec 13 09:11:26.438985 containerd[1473]: time="2024-12-13T09:11:26.438907110Z" level=info msg="CreateContainer within sandbox \"9054b482aba4d671c8225d67ad29e10a3ee3548b77e30dfccd8725689e721ff9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9208a3136d829bca7e98bb69d45ee9e26326ca20ef459eaf40ea18c74cd9fa31\""
Dec 13 09:11:26.441330 containerd[1473]: time="2024-12-13T09:11:26.440143427Z" level=info msg="StartContainer for \"9208a3136d829bca7e98bb69d45ee9e26326ca20ef459eaf40ea18c74cd9fa31\""
Dec 13 09:11:26.447857 kubelet[2151]: W1213 09:11:26.447771    2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.145.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-7-516c4b3017&limit=500&resourceVersion=0": dial tcp 165.232.145.99:6443: connect: connection refused
Dec 13 09:11:26.448541 kubelet[2151]: E1213 09:11:26.448496    2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://165.232.145.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-7-516c4b3017&limit=500&resourceVersion=0\": dial tcp 165.232.145.99:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:11:26.456272 kubelet[2151]: W1213 09:11:26.456078    2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.145.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.145.99:6443: connect: connection refused
Dec 13 09:11:26.457547 kubelet[2151]: E1213 09:11:26.456602    2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.145.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.145.99:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:11:26.497281 systemd[1]: Started cri-containerd-63e10ec524f5d0275ed30571fd8a11b8af62aeb99c7bf03e10bd8268c026be0e.scope - libcontainer container 63e10ec524f5d0275ed30571fd8a11b8af62aeb99c7bf03e10bd8268c026be0e.
Dec 13 09:11:26.498886 kubelet[2151]: I1213 09:11:26.498829    2151 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:26.501375 kubelet[2151]: E1213 09:11:26.501096    2151 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.145.99:6443/api/v1/nodes\": dial tcp 165.232.145.99:6443: connect: connection refused" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:26.514718 systemd[1]: Started cri-containerd-9208a3136d829bca7e98bb69d45ee9e26326ca20ef459eaf40ea18c74cd9fa31.scope - libcontainer container 9208a3136d829bca7e98bb69d45ee9e26326ca20ef459eaf40ea18c74cd9fa31.
Dec 13 09:11:26.528551 systemd[1]: Started cri-containerd-91f17c1a2d3b8487e163a126e4141854a4fc5f34ad2589b83ad9cd68f0706069.scope - libcontainer container 91f17c1a2d3b8487e163a126e4141854a4fc5f34ad2589b83ad9cd68f0706069.
Dec 13 09:11:26.603870 containerd[1473]: time="2024-12-13T09:11:26.603815413Z" level=info msg="StartContainer for \"91f17c1a2d3b8487e163a126e4141854a4fc5f34ad2589b83ad9cd68f0706069\" returns successfully"
Dec 13 09:11:26.636758 containerd[1473]: time="2024-12-13T09:11:26.636309964Z" level=info msg="StartContainer for \"9208a3136d829bca7e98bb69d45ee9e26326ca20ef459eaf40ea18c74cd9fa31\" returns successfully"
Dec 13 09:11:26.659874 containerd[1473]: time="2024-12-13T09:11:26.659716899Z" level=info msg="StartContainer for \"63e10ec524f5d0275ed30571fd8a11b8af62aeb99c7bf03e10bd8268c026be0e\" returns successfully"
Dec 13 09:11:26.955346 kubelet[2151]: E1213 09:11:26.955181    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:26.957203 kubelet[2151]: E1213 09:11:26.956983    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:26.961161 kubelet[2151]: E1213 09:11:26.961106    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:27.965347 kubelet[2151]: E1213 09:11:27.964375    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:27.965347 kubelet[2151]: E1213 09:11:27.964915    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:28.105483 kubelet[2151]: I1213 09:11:28.103603    2151 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:28.809677 kubelet[2151]: E1213 09:11:28.809614    2151 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-7-516c4b3017\" not found" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:28.882886 kubelet[2151]: E1213 09:11:28.882584    2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.2.1-7-516c4b3017.1810b18f238b4835  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-7-516c4b3017,UID:ci-4081.2.1-7-516c4b3017,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-7-516c4b3017,},FirstTimestamp:2024-12-13 09:11:24.872411189 +0000 UTC m=+0.875768833,LastTimestamp:2024-12-13 09:11:24.872411189 +0000 UTC m=+0.875768833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-7-516c4b3017,}"
Dec 13 09:11:28.945160 kubelet[2151]: E1213 09:11:28.944681    2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.2.1-7-516c4b3017.1810b18f246bdd81  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-7-516c4b3017,UID:ci-4081.2.1-7-516c4b3017,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-7-516c4b3017,},FirstTimestamp:2024-12-13 09:11:24.887129473 +0000 UTC m=+0.890487122,LastTimestamp:2024-12-13 09:11:24.887129473 +0000 UTC m=+0.890487122,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-7-516c4b3017,}"
Dec 13 09:11:28.953333 kubelet[2151]: I1213 09:11:28.950772    2151 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:28.953333 kubelet[2151]: E1213 09:11:28.950843    2151 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.2.1-7-516c4b3017\": node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:28.972000 kubelet[2151]: E1213 09:11:28.971939    2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:29.007020 kubelet[2151]: E1213 09:11:29.006614    2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.2.1-7-516c4b3017.1810b18f274b32e4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-7-516c4b3017,UID:ci-4081.2.1-7-516c4b3017,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.2.1-7-516c4b3017 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-7-516c4b3017,},FirstTimestamp:2024-12-13 09:11:24.935320292 +0000 UTC m=+0.938677937,LastTimestamp:2024-12-13 09:11:24.935320292 +0000 UTC m=+0.938677937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-7-516c4b3017,}"
Dec 13 09:11:29.064264 kubelet[2151]: E1213 09:11:29.062577    2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.2.1-7-516c4b3017.1810b18f274b57c0  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-7-516c4b3017,UID:ci-4081.2.1-7-516c4b3017,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4081.2.1-7-516c4b3017 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-7-516c4b3017,},FirstTimestamp:2024-12-13 09:11:24.935329728 +0000 UTC m=+0.938687369,LastTimestamp:2024-12-13 09:11:24.935329728 +0000 UTC m=+0.938687369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-7-516c4b3017,}"
Dec 13 09:11:29.073013 kubelet[2151]: E1213 09:11:29.072911    2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:29.173176 kubelet[2151]: E1213 09:11:29.173099    2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:29.274231 kubelet[2151]: E1213 09:11:29.274157    2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:29.374405 kubelet[2151]: E1213 09:11:29.374333    2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:29.475237 kubelet[2151]: E1213 09:11:29.475138    2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:29.874612 kubelet[2151]: I1213 09:11:29.874280    2151 apiserver.go:52] "Watching apiserver"
Dec 13 09:11:29.886997 kubelet[2151]: I1213 09:11:29.886939    2151 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 09:11:30.959638 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-7.scope)...
Dec 13 09:11:30.959664 systemd[1]: Reloading...
Dec 13 09:11:31.085358 zram_generator::config[2476]: No configuration found.
Dec 13 09:11:31.225482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:11:31.340593 systemd[1]: Reloading finished in 380 ms.
Dec 13 09:11:31.398229 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:11:31.414234 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 09:11:31.414775 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:11:31.414935 systemd[1]: kubelet.service: Consumed 1.360s CPU time, 113.0M memory peak, 0B memory swap peak.
Dec 13 09:11:31.422791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:11:31.607763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:11:31.623036 (kubelet)[2524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 09:11:31.706544 kubelet[2524]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:11:31.706544 kubelet[2524]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 09:11:31.706544 kubelet[2524]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:11:31.707096 kubelet[2524]: I1213 09:11:31.706597    2524 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 09:11:31.719768 kubelet[2524]: I1213 09:11:31.719706    2524 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 09:11:31.721256 kubelet[2524]: I1213 09:11:31.720007    2524 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 09:11:31.721256 kubelet[2524]: I1213 09:11:31.720436    2524 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 09:11:31.723090 kubelet[2524]: I1213 09:11:31.723057    2524 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 09:11:31.727569 kubelet[2524]: I1213 09:11:31.727517    2524 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 09:11:31.733981 kubelet[2524]: E1213 09:11:31.733930    2524 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 09:11:31.734218 kubelet[2524]: I1213 09:11:31.734202    2524 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 09:11:31.738721 kubelet[2524]: I1213 09:11:31.738565    2524 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 09:11:31.739092 kubelet[2524]: I1213 09:11:31.739070    2524 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 09:11:31.739461 kubelet[2524]: I1213 09:11:31.739413    2524 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 09:11:31.739952 kubelet[2524]: I1213 09:11:31.739674    2524 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-7-516c4b3017","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 09:11:31.740362 kubelet[2524]: I1213 09:11:31.740178    2524 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 09:11:31.740362 kubelet[2524]: I1213 09:11:31.740200    2524 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 09:11:31.740362 kubelet[2524]: I1213 09:11:31.740247    2524 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:11:31.740978 kubelet[2524]: I1213 09:11:31.740698    2524 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 09:11:31.740978 kubelet[2524]: I1213 09:11:31.740745    2524 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 09:11:31.740978 kubelet[2524]: I1213 09:11:31.740789    2524 kubelet.go:314] "Adding apiserver pod source"
Dec 13 09:11:31.740978 kubelet[2524]: I1213 09:11:31.740810    2524 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 09:11:31.749725 kubelet[2524]: I1213 09:11:31.749573    2524 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 09:11:31.752341 kubelet[2524]: I1213 09:11:31.750907    2524 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 09:11:31.752658 kubelet[2524]: I1213 09:11:31.752638    2524 server.go:1269] "Started kubelet"
Dec 13 09:11:31.759155 kubelet[2524]: I1213 09:11:31.759103    2524 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 09:11:31.761057 kubelet[2524]: I1213 09:11:31.761002    2524 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 09:11:31.765497 kubelet[2524]: I1213 09:11:31.764906    2524 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 09:11:31.772128 kubelet[2524]: I1213 09:11:31.765056    2524 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 09:11:31.773328 kubelet[2524]: I1213 09:11:31.772868    2524 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 09:11:31.773328 kubelet[2524]: I1213 09:11:31.766531    2524 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 09:11:31.773328 kubelet[2524]: I1213 09:11:31.766514    2524 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 09:11:31.773328 kubelet[2524]: E1213 09:11:31.766775    2524 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-7-516c4b3017\" not found"
Dec 13 09:11:31.773574 kubelet[2524]: I1213 09:11:31.773412    2524 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 09:11:31.779411 kubelet[2524]: I1213 09:11:31.778133    2524 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 09:11:31.787135 kubelet[2524]: I1213 09:11:31.779471    2524 factory.go:221] Registration of the systemd container factory successfully
Dec 13 09:11:31.789142 kubelet[2524]: I1213 09:11:31.787885    2524 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 09:11:31.795340 kubelet[2524]: I1213 09:11:31.793851    2524 factory.go:221] Registration of the containerd container factory successfully
Dec 13 09:11:31.799383 kubelet[2524]: I1213 09:11:31.798057    2524 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 09:11:31.799678 kubelet[2524]: I1213 09:11:31.799645    2524 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 09:11:31.799678 kubelet[2524]: I1213 09:11:31.799681    2524 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 09:11:31.799811 kubelet[2524]: I1213 09:11:31.799701    2524 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 09:11:31.799811 kubelet[2524]: E1213 09:11:31.799755    2524 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 09:11:31.812965 kubelet[2524]: E1213 09:11:31.812891    2524 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 09:11:31.889992 kubelet[2524]: I1213 09:11:31.889396    2524 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 09:11:31.889992 kubelet[2524]: I1213 09:11:31.889425    2524 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 09:11:31.889992 kubelet[2524]: I1213 09:11:31.889457    2524 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:11:31.889992 kubelet[2524]: I1213 09:11:31.889699    2524 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 09:11:31.889992 kubelet[2524]: I1213 09:11:31.889715    2524 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 09:11:31.889992 kubelet[2524]: I1213 09:11:31.889743    2524 policy_none.go:49] "None policy: Start"
Dec 13 09:11:31.890876 kubelet[2524]: I1213 09:11:31.890856    2524 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 09:11:31.891009 kubelet[2524]: I1213 09:11:31.891001    2524 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 09:11:31.891372 kubelet[2524]: I1213 09:11:31.891354    2524 state_mem.go:75] "Updated machine memory state"
Dec 13 09:11:31.898513 kubelet[2524]: I1213 09:11:31.898475    2524 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 09:11:31.900372 kubelet[2524]: I1213 09:11:31.899708    2524 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 09:11:31.900372 kubelet[2524]: I1213 09:11:31.899731    2524 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 09:11:31.900372 kubelet[2524]: I1213 09:11:31.900195    2524 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 09:11:31.924230 kubelet[2524]: W1213 09:11:31.924181    2524 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 09:11:31.924468 kubelet[2524]: W1213 09:11:31.924268    2524 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 09:11:31.924552 kubelet[2524]: W1213 09:11:31.924523    2524 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
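The three warnings above presumably fire because the static pod names embed the dotted node name "ci-4081.2.1-7-516c4b3017", and a pod name that is not a valid DNS label can surprise when used as the pod's hostname. An illustrative check (my own regexp, not the apiserver's exact validator) against the RFC 1123 label rules:

```go
package main

import (
	"fmt"
	"regexp"
)

// RFC 1123 DNS label: lowercase alphanumerics and '-', starting and
// ending alphanumeric. Dots are what trigger the warnings above.
var dnsLabel = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

func main() {
	for _, name := range []string{
		"kube-apiserver-ci-4081.2.1-7-516c4b3017", // static pod name with dots
		"kube-proxy-wsmbz",                        // a clean DNS label
	} {
		fmt.Printf("%-42s valid label: %v\n", name, dnsLabel.MatchString(name))
	}
}
```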
Dec 13 09:11:31.978825 kubelet[2524]: I1213 09:11:31.977142    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/452b2e1358c7e9f80977d484e05d540b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-7-516c4b3017\" (UID: \"452b2e1358c7e9f80977d484e05d540b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:31.978825 kubelet[2524]: I1213 09:11:31.978546    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:31.978825 kubelet[2524]: I1213 09:11:31.978577    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/452b2e1358c7e9f80977d484e05d540b-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-7-516c4b3017\" (UID: \"452b2e1358c7e9f80977d484e05d540b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:31.978825 kubelet[2524]: I1213 09:11:31.978596    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:31.978825 kubelet[2524]: I1213 09:11:31.978626    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:31.979226 kubelet[2524]: I1213 09:11:31.978654    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:31.979226 kubelet[2524]: I1213 09:11:31.978682    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11e4248dbe64b1f90198a1904f8f3b42-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-7-516c4b3017\" (UID: \"11e4248dbe64b1f90198a1904f8f3b42\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:31.979226 kubelet[2524]: I1213 09:11:31.978709    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6f95b01e32296b917fd901e703bc0cf-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-7-516c4b3017\" (UID: \"e6f95b01e32296b917fd901e703bc0cf\") " pod="kube-system/kube-scheduler-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:31.979226 kubelet[2524]: I1213 09:11:31.978734    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/452b2e1358c7e9f80977d484e05d540b-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-7-516c4b3017\" (UID: \"452b2e1358c7e9f80977d484e05d540b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:32.015415 kubelet[2524]: I1213 09:11:32.014936    2524 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:32.032086 kubelet[2524]: I1213 09:11:32.031518    2524 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:32.032086 kubelet[2524]: I1213 09:11:32.031630    2524 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.2.1-7-516c4b3017"
Dec 13 09:11:32.226154 kubelet[2524]: E1213 09:11:32.225995    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:32.228352 kubelet[2524]: E1213 09:11:32.226691    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:32.228352 kubelet[2524]: E1213 09:11:32.227784    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
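Classic resolvers (glibc's MAXNS) consult at most three nameservers, which is the limit behind the "Nameserver limits exceeded" errors above: extra entries in the pod's resolv.conf are silently dropped and the kubelet logs the applied line. An illustrative sketch of that truncation; the fourth address below is hypothetical, not from the log.

```go
package main

import "fmt"

// maxNameservers matches the classic resolv.conf limit of three.
const maxNameservers = 3

func applyLimit(ns []string) (applied, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	servers := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.53"}
	applied, omitted := applyLimit(servers)
	fmt.Println("applied:", applied)
	fmt.Println("omitted:", omitted)
}
```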
Dec 13 09:11:33.603288 systemd-resolved[1324]: Clock change detected. Flushing caches.
Dec 13 09:11:33.603509 systemd-timesyncd[1345]: Contacted time server 144.202.41.38:123 (2.flatcar.pool.ntp.org).
Dec 13 09:11:33.603581 systemd-timesyncd[1345]: Initial clock synchronization to Fri 2024-12-13 09:11:33.603167 UTC.
Dec 13 09:11:33.651385 kubelet[2524]: I1213 09:11:33.651303    2524 apiserver.go:52] "Watching apiserver"
Dec 13 09:11:33.676408 kubelet[2524]: I1213 09:11:33.676331    2524 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 09:11:33.754067 kubelet[2524]: I1213 09:11:33.751573    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-7-516c4b3017" podStartSLOduration=2.751540747 podStartE2EDuration="2.751540747s" podCreationTimestamp="2024-12-13 09:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:33.75089651 +0000 UTC m=+1.202050149" watchObservedRunningTime="2024-12-13 09:11:33.751540747 +0000 UTC m=+1.202694373"
Dec 13 09:11:33.754067 kubelet[2524]: I1213 09:11:33.751750    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-7-516c4b3017" podStartSLOduration=2.751742813 podStartE2EDuration="2.751742813s" podCreationTimestamp="2024-12-13 09:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:33.733306279 +0000 UTC m=+1.184459914" watchObservedRunningTime="2024-12-13 09:11:33.751742813 +0000 UTC m=+1.202896455"
Dec 13 09:11:33.767190 kubelet[2524]: E1213 09:11:33.766872    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:33.771060 kubelet[2524]: E1213 09:11:33.768631    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:33.771400 kubelet[2524]: E1213 09:11:33.768693    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:33.808510 kubelet[2524]: I1213 09:11:33.808279    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-7-516c4b3017" podStartSLOduration=2.80825112 podStartE2EDuration="2.80825112s" podCreationTimestamp="2024-12-13 09:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:33.787755229 +0000 UTC m=+1.238908860" watchObservedRunningTime="2024-12-13 09:11:33.80825112 +0000 UTC m=+1.259404763"
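The numbers in the startup-latency entries above line up directly: for these static pods no image pull is recorded (the pull timestamps are the zero time), so podStartSLOduration equals the watch-observed running time minus the pod creation timestamp. A quick check with the kube-controller-manager values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-controller-manager entry above.
	created, _ := time.Parse(time.RFC3339, "2024-12-13T09:11:31Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2024-12-13T09:11:33.80825112Z")
	fmt.Println("podStartSLOduration:", observed.Sub(created)) // 2.80825112s
}
```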
Dec 13 09:11:34.775505 kubelet[2524]: E1213 09:11:34.774900    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:37.676306 kubelet[2524]: I1213 09:11:37.675965    2524 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 09:11:37.677058 kubelet[2524]: I1213 09:11:37.676883    2524 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 09:11:37.677119 containerd[1473]: time="2024-12-13T09:11:37.676593721Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
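The two kubelet entries above hand the node's podCIDR to the container runtime over CRI, after which containerd waits for a CNI plugin to drop its config. A hedged sketch of that CRI call using the k8s.io/cri-api types; the containerd socket path is an assumption for a host like this one, and this is not the kubelet's own code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path for a containerd host.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Push the pod CIDR from the log entry above to the runtime.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = client.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	fmt.Println("UpdateRuntimeConfig error:", err)
}
```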
Dec 13 09:11:37.699946 sudo[1655]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:37.706369 sshd[1652]: pam_unix(sshd:session): session closed for user core
Dec 13 09:11:37.713118 systemd[1]: sshd@6-165.232.145.99:22-147.75.109.163:57966.service: Deactivated successfully.
Dec 13 09:11:37.719155 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 09:11:37.720402 systemd[1]: session-7.scope: Consumed 5.763s CPU time, 150.0M memory peak, 0B memory swap peak.
Dec 13 09:11:37.725491 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit.
Dec 13 09:11:37.729141 systemd-logind[1450]: Removed session 7.
Dec 13 09:11:38.490930 systemd[1]: Created slice kubepods-besteffort-pod466c3c4e_e44d_49dc_b37d_4335cfedd3b4.slice - libcontainer container kubepods-besteffort-pod466c3c4e_e44d_49dc_b37d_4335cfedd3b4.slice.
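The slice name above encodes the pod's QoS class and UID, with the UID's dashes mapped to underscores because "-" acts as a hierarchy separator in systemd unit names. A sketch of that mapping (not kubelet source), reproducing the name in the log:

```go
package main

import (
	"fmt"
	"strings"
)

// besteffortSlice derives the systemd slice unit for a BestEffort pod UID.
func besteffortSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(besteffortSlice("466c3c4e-e44d-49dc-b37d-4335cfedd3b4"))
	// kubepods-besteffort-pod466c3c4e_e44d_49dc_b37d_4335cfedd3b4.slice
}
```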
Dec 13 09:11:38.622668 kubelet[2524]: I1213 09:11:38.622612    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/466c3c4e-e44d-49dc-b37d-4335cfedd3b4-kube-proxy\") pod \"kube-proxy-wsmbz\" (UID: \"466c3c4e-e44d-49dc-b37d-4335cfedd3b4\") " pod="kube-system/kube-proxy-wsmbz"
Dec 13 09:11:38.622668 kubelet[2524]: I1213 09:11:38.622669    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/466c3c4e-e44d-49dc-b37d-4335cfedd3b4-lib-modules\") pod \"kube-proxy-wsmbz\" (UID: \"466c3c4e-e44d-49dc-b37d-4335cfedd3b4\") " pod="kube-system/kube-proxy-wsmbz"
Dec 13 09:11:38.622921 kubelet[2524]: I1213 09:11:38.622708    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwkn\" (UniqueName: \"kubernetes.io/projected/466c3c4e-e44d-49dc-b37d-4335cfedd3b4-kube-api-access-jpwkn\") pod \"kube-proxy-wsmbz\" (UID: \"466c3c4e-e44d-49dc-b37d-4335cfedd3b4\") " pod="kube-system/kube-proxy-wsmbz"
Dec 13 09:11:38.622921 kubelet[2524]: I1213 09:11:38.622738    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/466c3c4e-e44d-49dc-b37d-4335cfedd3b4-xtables-lock\") pod \"kube-proxy-wsmbz\" (UID: \"466c3c4e-e44d-49dc-b37d-4335cfedd3b4\") " pod="kube-system/kube-proxy-wsmbz"
Dec 13 09:11:38.811510 kubelet[2524]: E1213 09:11:38.810882    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:38.814149 containerd[1473]: time="2024-12-13T09:11:38.813509063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wsmbz,Uid:466c3c4e-e44d-49dc-b37d-4335cfedd3b4,Namespace:kube-system,Attempt:0,}"
Dec 13 09:11:38.832732 systemd[1]: Created slice kubepods-besteffort-pod7cfd7c91_a496_4a5b_b89b_95f85126f1c7.slice - libcontainer container kubepods-besteffort-pod7cfd7c91_a496_4a5b_b89b_95f85126f1c7.slice.
Dec 13 09:11:38.889880 containerd[1473]: time="2024-12-13T09:11:38.889434418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:11:38.889880 containerd[1473]: time="2024-12-13T09:11:38.889680183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:11:38.889880 containerd[1473]: time="2024-12-13T09:11:38.889730263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:38.890842 containerd[1473]: time="2024-12-13T09:11:38.890327080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:38.923978 systemd[1]: run-containerd-runc-k8s.io-ea5deeae18dd741a708b92afa9363a6f78ee702357d6c66ac8d6b53f53f62ab9-runc.HkHUuc.mount: Deactivated successfully.
Dec 13 09:11:38.928431 kubelet[2524]: I1213 09:11:38.927967    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7cfd7c91-a496-4a5b-b89b-95f85126f1c7-var-lib-calico\") pod \"tigera-operator-76c4976dd7-fg6dd\" (UID: \"7cfd7c91-a496-4a5b-b89b-95f85126f1c7\") " pod="tigera-operator/tigera-operator-76c4976dd7-fg6dd"
Dec 13 09:11:38.928431 kubelet[2524]: I1213 09:11:38.928036    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdq8t\" (UniqueName: \"kubernetes.io/projected/7cfd7c91-a496-4a5b-b89b-95f85126f1c7-kube-api-access-hdq8t\") pod \"tigera-operator-76c4976dd7-fg6dd\" (UID: \"7cfd7c91-a496-4a5b-b89b-95f85126f1c7\") " pod="tigera-operator/tigera-operator-76c4976dd7-fg6dd"
Dec 13 09:11:38.941499 systemd[1]: Started cri-containerd-ea5deeae18dd741a708b92afa9363a6f78ee702357d6c66ac8d6b53f53f62ab9.scope - libcontainer container ea5deeae18dd741a708b92afa9363a6f78ee702357d6c66ac8d6b53f53f62ab9.
Dec 13 09:11:38.993649 containerd[1473]: time="2024-12-13T09:11:38.993547372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wsmbz,Uid:466c3c4e-e44d-49dc-b37d-4335cfedd3b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea5deeae18dd741a708b92afa9363a6f78ee702357d6c66ac8d6b53f53f62ab9\""
Dec 13 09:11:38.995881 kubelet[2524]: E1213 09:11:38.995807    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:39.007691 containerd[1473]: time="2024-12-13T09:11:39.007615602Z" level=info msg="CreateContainer within sandbox \"ea5deeae18dd741a708b92afa9363a6f78ee702357d6c66ac8d6b53f53f62ab9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 09:11:39.075748 containerd[1473]: time="2024-12-13T09:11:39.075516263Z" level=info msg="CreateContainer within sandbox \"ea5deeae18dd741a708b92afa9363a6f78ee702357d6c66ac8d6b53f53f62ab9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"37eb23f8f990bfec257d2e43100e26571b6becfd5896748b31378ce0218a1739\""
Dec 13 09:11:39.079172 containerd[1473]: time="2024-12-13T09:11:39.077264298Z" level=info msg="StartContainer for \"37eb23f8f990bfec257d2e43100e26571b6becfd5896748b31378ce0218a1739\""
Dec 13 09:11:39.129415 systemd[1]: Started cri-containerd-37eb23f8f990bfec257d2e43100e26571b6becfd5896748b31378ce0218a1739.scope - libcontainer container 37eb23f8f990bfec257d2e43100e26571b6becfd5896748b31378ce0218a1739.
Dec 13 09:11:39.172930 containerd[1473]: time="2024-12-13T09:11:39.172866572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-fg6dd,Uid:7cfd7c91-a496-4a5b-b89b-95f85126f1c7,Namespace:tigera-operator,Attempt:0,}"
Dec 13 09:11:39.186090 containerd[1473]: time="2024-12-13T09:11:39.185986818Z" level=info msg="StartContainer for \"37eb23f8f990bfec257d2e43100e26571b6becfd5896748b31378ce0218a1739\" returns successfully"
Dec 13 09:11:39.225386 containerd[1473]: time="2024-12-13T09:11:39.225133573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:11:39.225386 containerd[1473]: time="2024-12-13T09:11:39.225233535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:11:39.225386 containerd[1473]: time="2024-12-13T09:11:39.225290440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:39.225985 containerd[1473]: time="2024-12-13T09:11:39.225714683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:39.289178 systemd[1]: Started cri-containerd-d8105acac09726e485d2d1ef3c86e093081f82ff85254833569b969ecc83b299.scope - libcontainer container d8105acac09726e485d2d1ef3c86e093081f82ff85254833569b969ecc83b299.
Dec 13 09:11:39.425250 containerd[1473]: time="2024-12-13T09:11:39.424840950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-fg6dd,Uid:7cfd7c91-a496-4a5b-b89b-95f85126f1c7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d8105acac09726e485d2d1ef3c86e093081f82ff85254833569b969ecc83b299\""
Dec 13 09:11:39.432391 containerd[1473]: time="2024-12-13T09:11:39.432016406Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 09:11:39.818427 kubelet[2524]: E1213 09:11:39.818337    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:41.078339 update_engine[1452]: I20241213 09:11:41.078132  1452 update_attempter.cc:509] Updating boot flags...
Dec 13 09:11:41.123649 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2853)
Dec 13 09:11:41.495622 kubelet[2524]: E1213 09:11:41.495149    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:41.524858 kubelet[2524]: I1213 09:11:41.524778    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wsmbz" podStartSLOduration=3.52474999 podStartE2EDuration="3.52474999s" podCreationTimestamp="2024-12-13 09:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:39.848340598 +0000 UTC m=+7.299494247" watchObservedRunningTime="2024-12-13 09:11:41.52474999 +0000 UTC m=+8.975903618"
Dec 13 09:11:41.826661 kubelet[2524]: E1213 09:11:41.826572    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:41.883797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690384504.mount: Deactivated successfully.
Dec 13 09:11:42.480096 kubelet[2524]: E1213 09:11:42.479959    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:42.639884 kubelet[2524]: E1213 09:11:42.638216    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:42.827086 kubelet[2524]: E1213 09:11:42.825700    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:42.827086 kubelet[2524]: E1213 09:11:42.826500    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:44.299690 containerd[1473]: time="2024-12-13T09:11:44.299594110Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:44.301716 containerd[1473]: time="2024-12-13T09:11:44.300979267Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763749"
Dec 13 09:11:44.304150 containerd[1473]: time="2024-12-13T09:11:44.303376297Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:44.315278 containerd[1473]: time="2024-12-13T09:11:44.315201646Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:44.316218 containerd[1473]: time="2024-12-13T09:11:44.316146156Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.884021455s"
Dec 13 09:11:44.316218 containerd[1473]: time="2024-12-13T09:11:44.316201245Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 09:11:44.320728 containerd[1473]: time="2024-12-13T09:11:44.320185613Z" level=info msg="CreateContainer within sandbox \"d8105acac09726e485d2d1ef3c86e093081f82ff85254833569b969ecc83b299\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 09:11:44.349271 containerd[1473]: time="2024-12-13T09:11:44.349174963Z" level=info msg="CreateContainer within sandbox \"d8105acac09726e485d2d1ef3c86e093081f82ff85254833569b969ecc83b299\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb6834d4e2cbc21e49c50264ca5313d535b3251a7279a3b26260207197832dee\""
Dec 13 09:11:44.352280 containerd[1473]: time="2024-12-13T09:11:44.352187109Z" level=info msg="StartContainer for \"cb6834d4e2cbc21e49c50264ca5313d535b3251a7279a3b26260207197832dee\""
Dec 13 09:11:44.407552 systemd[1]: Started cri-containerd-cb6834d4e2cbc21e49c50264ca5313d535b3251a7279a3b26260207197832dee.scope - libcontainer container cb6834d4e2cbc21e49c50264ca5313d535b3251a7279a3b26260207197832dee.
Dec 13 09:11:44.452979 containerd[1473]: time="2024-12-13T09:11:44.452429378Z" level=info msg="StartContainer for \"cb6834d4e2cbc21e49c50264ca5313d535b3251a7279a3b26260207197832dee\" returns successfully"
Dec 13 09:11:44.851841 kubelet[2524]: I1213 09:11:44.851769    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-fg6dd" podStartSLOduration=1.96333294 podStartE2EDuration="6.85174889s" podCreationTimestamp="2024-12-13 09:11:38 +0000 UTC" firstStartedPulling="2024-12-13 09:11:39.429398155 +0000 UTC m=+6.880551775" lastFinishedPulling="2024-12-13 09:11:44.317814112 +0000 UTC m=+11.768967725" observedRunningTime="2024-12-13 09:11:44.851677726 +0000 UTC m=+12.302831369" watchObservedRunningTime="2024-12-13 09:11:44.85174889 +0000 UTC m=+12.302902522"
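Unlike the static pods earlier, the tigera-operator pod did pull an image, and the entry above shows the difference between the two duration fields: the E2E figure spans creation to observed running, while the SLO figure excludes the pull window. The arithmetic checks out against the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the tigera-operator entry above.
	created := mustParse("2024-12-13T09:11:38Z")
	firstPull := mustParse("2024-12-13T09:11:39.429398155Z")
	lastPull := mustParse("2024-12-13T09:11:44.317814112Z")
	observed := mustParse("2024-12-13T09:11:44.85174889Z")

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // subtract the image-pull window
	fmt.Println("podStartE2EDuration:", e2e) // 6.85174889s
	fmt.Println("podStartSLOduration:", slo) // ~1.96333294s
}
```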
Dec 13 09:11:48.206214 systemd[1]: Created slice kubepods-besteffort-pod94b502ba_99bf_4f29_be51_30aeb4c09ead.slice - libcontainer container kubepods-besteffort-pod94b502ba_99bf_4f29_be51_30aeb4c09ead.slice.
Dec 13 09:11:48.304326 kubelet[2524]: I1213 09:11:48.304240    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/94b502ba-99bf-4f29-be51-30aeb4c09ead-typha-certs\") pod \"calico-typha-77f6ddb4d6-78zbn\" (UID: \"94b502ba-99bf-4f29-be51-30aeb4c09ead\") " pod="calico-system/calico-typha-77f6ddb4d6-78zbn"
Dec 13 09:11:48.304326 kubelet[2524]: I1213 09:11:48.304327    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94b502ba-99bf-4f29-be51-30aeb4c09ead-tigera-ca-bundle\") pod \"calico-typha-77f6ddb4d6-78zbn\" (UID: \"94b502ba-99bf-4f29-be51-30aeb4c09ead\") " pod="calico-system/calico-typha-77f6ddb4d6-78zbn"
Dec 13 09:11:48.304906 kubelet[2524]: I1213 09:11:48.304366    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ccw\" (UniqueName: \"kubernetes.io/projected/94b502ba-99bf-4f29-be51-30aeb4c09ead-kube-api-access-66ccw\") pod \"calico-typha-77f6ddb4d6-78zbn\" (UID: \"94b502ba-99bf-4f29-be51-30aeb4c09ead\") " pod="calico-system/calico-typha-77f6ddb4d6-78zbn"
Dec 13 09:11:48.488523 systemd[1]: Created slice kubepods-besteffort-podec6d9fc3_49f1_4482_ad2e_fcffec4b6d62.slice - libcontainer container kubepods-besteffort-podec6d9fc3_49f1_4482_ad2e_fcffec4b6d62.slice.
Dec 13 09:11:48.517690 kubelet[2524]: E1213 09:11:48.514973    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:48.517892 containerd[1473]: time="2024-12-13T09:11:48.516377118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f6ddb4d6-78zbn,Uid:94b502ba-99bf-4f29-be51-30aeb4c09ead,Namespace:calico-system,Attempt:0,}"
Dec 13 09:11:48.602058 containerd[1473]: time="2024-12-13T09:11:48.601587806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:11:48.602058 containerd[1473]: time="2024-12-13T09:11:48.601703838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:11:48.602058 containerd[1473]: time="2024-12-13T09:11:48.601727938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:48.602058 containerd[1473]: time="2024-12-13T09:11:48.601898253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:48.605330 kubelet[2524]: I1213 09:11:48.605262    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-node-certs\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605330 kubelet[2524]: I1213 09:11:48.605336    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-flexvol-driver-host\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605587 kubelet[2524]: I1213 09:11:48.605375    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gb9\" (UniqueName: \"kubernetes.io/projected/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-kube-api-access-m4gb9\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605587 kubelet[2524]: I1213 09:11:48.605404    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-lib-modules\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605587 kubelet[2524]: I1213 09:11:48.605430    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-xtables-lock\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605587 kubelet[2524]: I1213 09:11:48.605458    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-var-lib-calico\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605587 kubelet[2524]: I1213 09:11:48.605486    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-var-run-calico\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605771 kubelet[2524]: I1213 09:11:48.605517    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-cni-bin-dir\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605771 kubelet[2524]: I1213 09:11:48.605540    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-cni-net-dir\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605771 kubelet[2524]: I1213 09:11:48.605563    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-cni-log-dir\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605771 kubelet[2524]: I1213 09:11:48.605595    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-tigera-ca-bundle\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.605771 kubelet[2524]: I1213 09:11:48.605622    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62-policysync\") pod \"calico-node-gw94v\" (UID: \"ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62\") " pod="calico-system/calico-node-gw94v"
Dec 13 09:11:48.657790 kubelet[2524]: E1213 09:11:48.656900    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
Dec 13 09:11:48.684896 systemd[1]: Started cri-containerd-f26165d20c5aaadee15d44454304f71e2c1ef9604d45e95119c4c03f95460cfe.scope - libcontainer container f26165d20c5aaadee15d44454304f71e2c1ef9604d45e95119c4c03f95460cfe.
Dec 13 09:11:48.718460 kubelet[2524]: E1213 09:11:48.718005    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.718460 kubelet[2524]: W1213 09:11:48.718124    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.718460 kubelet[2524]: E1213 09:11:48.718193    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
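The triplet above is one failure chain: the FlexVolume prober execs "<driver> init" expecting a JSON status on stdout, the binary at the logged path does not exist, the output is therefore empty, and unmarshalling "" fails with "unexpected end of JSON input". A sketch reproducing that chain; the path is copied from the log and the struct is a simplified stand-in for the real DriverStatus type.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for the FlexVolume DriverStatus.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	out, execErr := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).Output()

	var st driverStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		// Missing binary => empty output => "unexpected end of JSON input".
		fmt.Printf("exec: %v; unmarshal: %v\n", execErr, jsonErr)
		return
	}
	fmt.Printf("driver status: %+v\n", st)
}
```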
Dec 13 09:11:48.798411 kubelet[2524]: E1213 09:11:48.797888    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:48.799593 containerd[1473]: time="2024-12-13T09:11:48.799011441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gw94v,Uid:ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62,Namespace:calico-system,Attempt:0,}"
Dec 13 09:11:48.808640 kubelet[2524]: I1213 09:11:48.808405    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0399f05a-b42a-4620-afd9-27c69d03846d-registration-dir\") pod \"csi-node-driver-bbgp8\" (UID: \"0399f05a-b42a-4620-afd9-27c69d03846d\") " pod="calico-system/csi-node-driver-bbgp8"
Dec 13 09:11:48.813384 kubelet[2524]: I1213 09:11:48.811796    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0399f05a-b42a-4620-afd9-27c69d03846d-socket-dir\") pod \"csi-node-driver-bbgp8\" (UID: \"0399f05a-b42a-4620-afd9-27c69d03846d\") " pod="calico-system/csi-node-driver-bbgp8"
Dec 13 09:11:48.813384 kubelet[2524]: E1213 09:11:48.812241    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.813384 kubelet[2524]: W1213 09:11:48.812262    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.813384 kubelet[2524]: E1213 09:11:48.812299    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.813384 kubelet[2524]: I1213 09:11:48.812330    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4xw4\" (UniqueName: \"kubernetes.io/projected/0399f05a-b42a-4620-afd9-27c69d03846d-kube-api-access-x4xw4\") pod \"csi-node-driver-bbgp8\" (UID: \"0399f05a-b42a-4620-afd9-27c69d03846d\") " pod="calico-system/csi-node-driver-bbgp8"
Dec 13 09:11:48.813384 kubelet[2524]: E1213 09:11:48.813050    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.813384 kubelet[2524]: W1213 09:11:48.813069    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.814737 kubelet[2524]: E1213 09:11:48.813704    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.814737 kubelet[2524]: W1213 09:11:48.813726    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.814737 kubelet[2524]: E1213 09:11:48.813756    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.814737 kubelet[2524]: I1213 09:11:48.813790    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0399f05a-b42a-4620-afd9-27c69d03846d-varrun\") pod \"csi-node-driver-bbgp8\" (UID: \"0399f05a-b42a-4620-afd9-27c69d03846d\") " pod="calico-system/csi-node-driver-bbgp8"
Dec 13 09:11:48.814737 kubelet[2524]: E1213 09:11:48.813100    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.815450 kubelet[2524]: E1213 09:11:48.814985    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.815450 kubelet[2524]: W1213 09:11:48.815008    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.815450 kubelet[2524]: E1213 09:11:48.815060    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.815450 kubelet[2524]: I1213 09:11:48.815096    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0399f05a-b42a-4620-afd9-27c69d03846d-kubelet-dir\") pod \"csi-node-driver-bbgp8\" (UID: \"0399f05a-b42a-4620-afd9-27c69d03846d\") " pod="calico-system/csi-node-driver-bbgp8"
Dec 13 09:11:48.816158 kubelet[2524]: E1213 09:11:48.816058    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.816339 kubelet[2524]: W1213 09:11:48.816221    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.816339 kubelet[2524]: E1213 09:11:48.816272    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.817367 kubelet[2524]: E1213 09:11:48.817302    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.817367 kubelet[2524]: W1213 09:11:48.817326    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.818732 kubelet[2524]: E1213 09:11:48.818008    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.818732 kubelet[2524]: E1213 09:11:48.818400    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.818732 kubelet[2524]: W1213 09:11:48.818418    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.818732 kubelet[2524]: E1213 09:11:48.818465    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.820230 kubelet[2524]: E1213 09:11:48.819115    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.820230 kubelet[2524]: W1213 09:11:48.819130    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.820387 kubelet[2524]: E1213 09:11:48.820344    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.826454 kubelet[2524]: E1213 09:11:48.826260    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.826454 kubelet[2524]: W1213 09:11:48.826291    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.826454 kubelet[2524]: E1213 09:11:48.826324    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.827428 kubelet[2524]: E1213 09:11:48.827384    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.827428 kubelet[2524]: W1213 09:11:48.827412    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.827428 kubelet[2524]: E1213 09:11:48.827435    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.827774 kubelet[2524]: E1213 09:11:48.827704    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.827774 kubelet[2524]: W1213 09:11:48.827712    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.827774 kubelet[2524]: E1213 09:11:48.827721    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
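
The repeated driver-call.go / plugins.go triplets above are kubelet's FlexVolume dynamic probe failing: on each reprobe, kubelet executes every driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the single argument "init" and unmarshals the JSON the binary prints to stdout. The nodeagent~uds/uds executable does not exist yet (Calico's flexvol-driver container installs it later in this log), so the call produces empty output and the unmarshal fails with "unexpected end of JSON input". Below is a minimal sketch of the init handshake kubelet expects; the file name and capability values are illustrative, not Calico's actual driver.

    // flexvol_init_stub.go — hypothetical stand-in for a FlexVolume driver
    // binary. kubelet runs "<driver> init" and parses the JSON written to
    // stdout; empty output is exactly what produces the errors above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // DriverStatus mirrors the fields kubelet's driver-call.go unmarshals.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(DriverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Any verb this stub does not implement is reported as unsupported.
        out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }
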
Dec 13 09:11:48.885937 containerd[1473]: time="2024-12-13T09:11:48.885559179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:11:48.885937 containerd[1473]: time="2024-12-13T09:11:48.885669488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:11:48.886566 containerd[1473]: time="2024-12-13T09:11:48.886308911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:48.887431 containerd[1473]: time="2024-12-13T09:11:48.887127340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:11:48.918715 kubelet[2524]: E1213 09:11:48.918446    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.918715 kubelet[2524]: W1213 09:11:48.918489    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.920359 kubelet[2524]: E1213 09:11:48.918887    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.921191 kubelet[2524]: E1213 09:11:48.920820    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.921191 kubelet[2524]: W1213 09:11:48.920851    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.921191 kubelet[2524]: E1213 09:11:48.920901    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.923144 kubelet[2524]: E1213 09:11:48.922423    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.923144 kubelet[2524]: W1213 09:11:48.922450    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.923144 kubelet[2524]: E1213 09:11:48.922890    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.924331 kubelet[2524]: E1213 09:11:48.924276    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.924952 kubelet[2524]: W1213 09:11:48.924500    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.925944 kubelet[2524]: E1213 09:11:48.925534    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.925944 kubelet[2524]: W1213 09:11:48.925556    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.927928 kubelet[2524]: E1213 09:11:48.925250    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.927928 kubelet[2524]: E1213 09:11:48.927605    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.929331 kubelet[2524]: E1213 09:11:48.928897    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.929331 kubelet[2524]: W1213 09:11:48.928925    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.929331 kubelet[2524]: E1213 09:11:48.929212    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.930457 kubelet[2524]: E1213 09:11:48.930208    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.930622 kubelet[2524]: W1213 09:11:48.930594    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.931417 kubelet[2524]: E1213 09:11:48.931389    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.932455 kubelet[2524]: E1213 09:11:48.932325    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.932980 kubelet[2524]: W1213 09:11:48.932687    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.932980 kubelet[2524]: E1213 09:11:48.932787    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.934004 kubelet[2524]: E1213 09:11:48.933866    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.934004 kubelet[2524]: W1213 09:11:48.933886    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.934004 kubelet[2524]: E1213 09:11:48.933938    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.934931 kubelet[2524]: E1213 09:11:48.934727    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.934931 kubelet[2524]: W1213 09:11:48.934749    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.936890 kubelet[2524]: E1213 09:11:48.936281    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.936890 kubelet[2524]: W1213 09:11:48.936303    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.937487 kubelet[2524]: E1213 09:11:48.937233    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.937487 kubelet[2524]: E1213 09:11:48.937289    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.937487 kubelet[2524]: W1213 09:11:48.937433    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.938236 kubelet[2524]: E1213 09:11:48.937364    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.938236 kubelet[2524]: E1213 09:11:48.937710    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.939664 kubelet[2524]: E1213 09:11:48.938955    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.939664 kubelet[2524]: W1213 09:11:48.938988    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.939664 kubelet[2524]: E1213 09:11:48.939290    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.940396 kubelet[2524]: E1213 09:11:48.940057    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.940396 kubelet[2524]: W1213 09:11:48.940078    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.941321 kubelet[2524]: E1213 09:11:48.940992    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.941321 kubelet[2524]: W1213 09:11:48.941096    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.941813 kubelet[2524]: E1213 09:11:48.941707    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.942248 kubelet[2524]: W1213 09:11:48.941899    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.942597 kubelet[2524]: E1213 09:11:48.942565    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.942911 kubelet[2524]: W1213 09:11:48.942855    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.943383 systemd[1]: Started cri-containerd-5e2f6a5c37ccfbdb15a140a9a01870bb8835f17b65530d9acc850130982c2638.scope - libcontainer container 5e2f6a5c37ccfbdb15a140a9a01870bb8835f17b65530d9acc850130982c2638.
Dec 13 09:11:48.944766 kubelet[2524]: E1213 09:11:48.944681    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.945941 kubelet[2524]: W1213 09:11:48.945128    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.945941 kubelet[2524]: E1213 09:11:48.945299    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.946685 kubelet[2524]: E1213 09:11:48.946646    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.947768 kubelet[2524]: E1213 09:11:48.947739    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.948490 kubelet[2524]: W1213 09:11:48.947994    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.948490 kubelet[2524]: E1213 09:11:48.948062    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.948490 kubelet[2524]: E1213 09:11:48.948122    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.949278 kubelet[2524]: E1213 09:11:48.949131    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.950073 kubelet[2524]: W1213 09:11:48.949469    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.950073 kubelet[2524]: E1213 09:11:48.949504    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.950073 kubelet[2524]: E1213 09:11:48.949748    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.950073 kubelet[2524]: E1213 09:11:48.949798    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.951435 kubelet[2524]: E1213 09:11:48.951138    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.951435 kubelet[2524]: W1213 09:11:48.951200    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.951435 kubelet[2524]: E1213 09:11:48.951237    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.952931 kubelet[2524]: E1213 09:11:48.952837    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.952931 kubelet[2524]: W1213 09:11:48.952893    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.952931 kubelet[2524]: E1213 09:11:48.952928    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.953708 kubelet[2524]: E1213 09:11:48.953680    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.953708 kubelet[2524]: W1213 09:11:48.953703    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.954178 kubelet[2524]: E1213 09:11:48.953774    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.954411 kubelet[2524]: E1213 09:11:48.954387    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.954661 kubelet[2524]: W1213 09:11:48.954409    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.954661 kubelet[2524]: E1213 09:11:48.954499    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.955673 kubelet[2524]: E1213 09:11:48.955625    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.955673 kubelet[2524]: W1213 09:11:48.955649    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.956007 kubelet[2524]: E1213 09:11:48.955694    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:48.978229 kubelet[2524]: E1213 09:11:48.977961    2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:11:48.978229 kubelet[2524]: W1213 09:11:48.978087    2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:11:48.979108 kubelet[2524]: E1213 09:11:48.978318    2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:11:49.074116 containerd[1473]: time="2024-12-13T09:11:49.073574183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f6ddb4d6-78zbn,Uid:94b502ba-99bf-4f29-be51-30aeb4c09ead,Namespace:calico-system,Attempt:0,} returns sandbox id \"f26165d20c5aaadee15d44454304f71e2c1ef9604d45e95119c4c03f95460cfe\""
Dec 13 09:11:49.092067 containerd[1473]: time="2024-12-13T09:11:49.091795898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gw94v,Uid:ec6d9fc3-49f1-4482-ad2e-fcffec4b6d62,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e2f6a5c37ccfbdb15a140a9a01870bb8835f17b65530d9acc850130982c2638\""
Dec 13 09:11:49.093652 kubelet[2524]: E1213 09:11:49.093573    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:49.100923 kubelet[2524]: E1213 09:11:49.100245    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
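
The dns.go:153 warnings are kubelet's resolv.conf guardrail: the glibc stub resolver only honors the first three nameserver entries, so kubelet clamps the list it propagates into pod sandboxes to three and logs the line it actually applied. The applied line here (67.207.67.3 67.207.67.2 67.207.67.3, with a duplicate) indicates the node's /etc/resolv.conf listed more than three entries. A rough sketch of that clamping follows; it mimics the behavior the message describes, not kubelet's actual code.

    // clamp_nameservers.go — illustrative reduction of a resolv.conf
    // nameserver list to the glibc limit of three, as the kubelet
    // warning above describes.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, applying first %d of %d\n",
                maxNameservers, len(servers))
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied:", strings.Join(servers, " "))
    }
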
Dec 13 09:11:49.106004 containerd[1473]: time="2024-12-13T09:11:49.105914856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 09:11:50.703606 kubelet[2524]: E1213 09:11:50.703476    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
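
The "cni plugin not initialized" line gates everything that follows: the container runtime reports NetworkReady=false in its CRI status until a CNI network config appears in its conf directory (/etc/cni/net.d by default for containerd), and kubelet refuses to sync pods that need pod networking — here csi-node-driver-bbgp8 — logging this every retry until Calico's install-cni container writes that config. A small sketch of the precondition, assuming the default confdir (illustrative, not containerd's internal check):

    // cni_conf_check.go — looks for a CNI network config the way the
    // "cni plugin not initialized" message implies: the runtime's confdir
    // must contain a .conf/.conflist file before NetworkReady flips true.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        var matches []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            m, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
            matches = append(matches, m...)
        }
        if len(matches) == 0 {
            fmt.Fprintln(os.Stderr, "no CNI config yet: NetworkReady stays false")
            os.Exit(1)
        }
        fmt.Println("CNI config present:", matches)
    }
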
Dec 13 09:11:50.746625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773009925.mount: Deactivated successfully.
Dec 13 09:11:50.986232 containerd[1473]: time="2024-12-13T09:11:50.985505877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:50.995198 containerd[1473]: time="2024-12-13T09:11:50.995114746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Dec 13 09:11:50.997209 containerd[1473]: time="2024-12-13T09:11:50.997102461Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:51.002837 containerd[1473]: time="2024-12-13T09:11:51.002351838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:51.003690 containerd[1473]: time="2024-12-13T09:11:51.003624524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.897634568s"
Dec 13 09:11:51.003690 containerd[1473]: time="2024-12-13T09:11:51.003691064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
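
The pull record above contains enough data for a quick throughput sanity check: 6,855,165 bytes in 1.897634568s is roughly 3.6 MB/s. A back-of-the-envelope check using the logged values:

    // pull_throughput.go — arithmetic on the numbers containerd logged
    // for the pod2daemon-flexvol pull above.
    package main

    import "fmt"

    func main() {
        const bytesRead = 6855165    // "size" from the Pulled image record
        const duration = 1.897634568 // seconds, from the same record
        mbps := float64(bytesRead) / duration / 1e6
        fmt.Printf("≈ %.2f MB/s\n", mbps) // ≈ 3.61 MB/s
    }
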
Dec 13 09:11:51.012644 containerd[1473]: time="2024-12-13T09:11:51.012584384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 09:11:51.061713 containerd[1473]: time="2024-12-13T09:11:51.060559290Z" level=info msg="CreateContainer within sandbox \"5e2f6a5c37ccfbdb15a140a9a01870bb8835f17b65530d9acc850130982c2638\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 09:11:51.099002 containerd[1473]: time="2024-12-13T09:11:51.098379597Z" level=info msg="CreateContainer within sandbox \"5e2f6a5c37ccfbdb15a140a9a01870bb8835f17b65530d9acc850130982c2638\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2\""
Dec 13 09:11:51.105323 containerd[1473]: time="2024-12-13T09:11:51.105244456Z" level=info msg="StartContainer for \"7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2\""
Dec 13 09:11:51.180506 systemd[1]: Started cri-containerd-7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2.scope - libcontainer container 7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2.
Dec 13 09:11:51.247913 containerd[1473]: time="2024-12-13T09:11:51.247703546Z" level=info msg="StartContainer for \"7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2\" returns successfully"
Dec 13 09:11:51.285569 systemd[1]: cri-containerd-7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2.scope: Deactivated successfully.
Dec 13 09:11:51.352067 containerd[1473]: time="2024-12-13T09:11:51.351930185Z" level=info msg="shim disconnected" id=7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2 namespace=k8s.io
Dec 13 09:11:51.352067 containerd[1473]: time="2024-12-13T09:11:51.352048277Z" level=warning msg="cleaning up after shim disconnected" id=7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2 namespace=k8s.io
Dec 13 09:11:51.352067 containerd[1473]: time="2024-12-13T09:11:51.352061924Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 09:11:51.668009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2-rootfs.mount: Deactivated successfully.
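
The sequence above — StartContainer returns successfully, the systemd scope is deactivated, then containerd logs "shim disconnected" and unmounts the rootfs — is the normal lifecycle of an init-style container: the flexvol-driver copies the uds binary into the FlexVolume plugin directory and exits, at which point the runc v2 shim for that task shuts down, so the warnings are expected rather than a crash. A hedged sketch of observing such an exit through the containerd Go client; the socket path and the "k8s.io" namespace are the conventional CRI defaults, and the container ID is taken from the log above.

    // wait_for_exit.go — sketch of watching a short-lived CRI container
    // finish via the containerd Go client (github.com/containerd/containerd).
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        c, err := client.LoadContainer(ctx,
            "7216d6d074e88a125c360bf7f2f6596d2392a5a06ff5753b6f8ebdddcfd9f2c2")
        if err != nil {
            log.Fatal(err)
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        statusC, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        st := <-statusC
        code, exitedAt, err := st.Result()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("exited with code %d at %s\n", code, exitedAt)
    }
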
Dec 13 09:11:51.887496 kubelet[2524]: E1213 09:11:51.886773    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:52.705296 kubelet[2524]: E1213 09:11:52.704579    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
Dec 13 09:11:54.381477 containerd[1473]: time="2024-12-13T09:11:54.379851013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:54.381477 containerd[1473]: time="2024-12-13T09:11:54.381380850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Dec 13 09:11:54.382319 containerd[1473]: time="2024-12-13T09:11:54.382205501Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:54.388916 containerd[1473]: time="2024-12-13T09:11:54.385401381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:11:54.388916 containerd[1473]: time="2024-12-13T09:11:54.386329404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.373687179s"
Dec 13 09:11:54.388916 containerd[1473]: time="2024-12-13T09:11:54.386368656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 09:11:54.389484 containerd[1473]: time="2024-12-13T09:11:54.389440248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 09:11:54.402072 containerd[1473]: time="2024-12-13T09:11:54.401984293Z" level=info msg="CreateContainer within sandbox \"f26165d20c5aaadee15d44454304f71e2c1ef9604d45e95119c4c03f95460cfe\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 09:11:54.432778 containerd[1473]: time="2024-12-13T09:11:54.432541971Z" level=info msg="CreateContainer within sandbox \"f26165d20c5aaadee15d44454304f71e2c1ef9604d45e95119c4c03f95460cfe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"66d07465056747534766aa47b46afc2639033f88ee4c83da473194f7712eda02\""
Dec 13 09:11:54.435420 containerd[1473]: time="2024-12-13T09:11:54.435349032Z" level=info msg="StartContainer for \"66d07465056747534766aa47b46afc2639033f88ee4c83da473194f7712eda02\""
Dec 13 09:11:54.510367 systemd[1]: Started cri-containerd-66d07465056747534766aa47b46afc2639033f88ee4c83da473194f7712eda02.scope - libcontainer container 66d07465056747534766aa47b46afc2639033f88ee4c83da473194f7712eda02.
Dec 13 09:11:54.598932 containerd[1473]: time="2024-12-13T09:11:54.598432744Z" level=info msg="StartContainer for \"66d07465056747534766aa47b46afc2639033f88ee4c83da473194f7712eda02\" returns successfully"
Dec 13 09:11:54.706728 kubelet[2524]: E1213 09:11:54.704084    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
Dec 13 09:11:54.922114 kubelet[2524]: E1213 09:11:54.921618    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:54.956161 kubelet[2524]: I1213 09:11:54.956052    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77f6ddb4d6-78zbn" podStartSLOduration=1.6722080369999999 podStartE2EDuration="6.955991496s" podCreationTimestamp="2024-12-13 09:11:48 +0000 UTC" firstStartedPulling="2024-12-13 09:11:49.103995826 +0000 UTC m=+16.555149432" lastFinishedPulling="2024-12-13 09:11:54.387779272 +0000 UTC m=+21.838932891" observedRunningTime="2024-12-13 09:11:54.955753283 +0000 UTC m=+22.406906920" watchObservedRunningTime="2024-12-13 09:11:54.955991496 +0000 UTC m=+22.407145138"
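
The pod_startup_latency_tracker record is internally consistent: the image-pull window (lastFinishedPulling − firstStartedPulling, using the m=+ monotonic offsets: 21.838932891 − 16.555149432 ≈ 5.283783459s) subtracted from podStartE2EDuration (6.955991496s) gives 1.672208037s, matching the logged podStartSLOduration — the SLO metric deliberately excludes pull time. The check, using the offsets from the record itself:

    // slo_check.go — verifies the relation between the three durations in
    // the pod_startup_latency_tracker record above.
    package main

    import "fmt"

    func main() {
        const (
            firstStartedPulling = 16.555149432 // m=+ offset, seconds
            lastFinishedPulling = 21.838932891
            podStartE2E         = 6.955991496
        )
        pull := lastFinishedPulling - firstStartedPulling
        slo := podStartE2E - pull
        fmt.Printf("pull window %.9fs, SLO duration %.9fs\n", pull, slo)
        // pull window 5.283783459s, SLO duration 1.672208037s —
        // matching podStartSLOduration in the log.
    }
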
Dec 13 09:11:55.923110 kubelet[2524]: I1213 09:11:55.923054    2524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 09:11:55.923683 kubelet[2524]: E1213 09:11:55.923502    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:11:56.708541 kubelet[2524]: E1213 09:11:56.706574    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
Dec 13 09:11:58.703375 kubelet[2524]: E1213 09:11:58.703226    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
Dec 13 09:12:00.510569 containerd[1473]: time="2024-12-13T09:12:00.510469987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:00.512067 containerd[1473]: time="2024-12-13T09:12:00.511988223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 13 09:12:00.514896 containerd[1473]: time="2024-12-13T09:12:00.514091394Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:00.516783 containerd[1473]: time="2024-12-13T09:12:00.516734722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:00.519061 containerd[1473]: time="2024-12-13T09:12:00.518979905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.129485341s"
Dec 13 09:12:00.519297 containerd[1473]: time="2024-12-13T09:12:00.519271181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 09:12:00.523888 containerd[1473]: time="2024-12-13T09:12:00.523806853Z" level=info msg="CreateContainer within sandbox \"5e2f6a5c37ccfbdb15a140a9a01870bb8835f17b65530d9acc850130982c2638\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 09:12:00.591309 containerd[1473]: time="2024-12-13T09:12:00.591206298Z" level=info msg="CreateContainer within sandbox \"5e2f6a5c37ccfbdb15a140a9a01870bb8835f17b65530d9acc850130982c2638\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d\""
Dec 13 09:12:00.595814 containerd[1473]: time="2024-12-13T09:12:00.594960443Z" level=info msg="StartContainer for \"ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d\""
Dec 13 09:12:00.707537 kubelet[2524]: E1213 09:12:00.707472    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
Dec 13 09:12:00.820510 systemd[1]: Started cri-containerd-ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d.scope - libcontainer container ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d.
Dec 13 09:12:00.915116 containerd[1473]: time="2024-12-13T09:12:00.914162675Z" level=info msg="StartContainer for \"ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d\" returns successfully"
Dec 13 09:12:00.976748 kubelet[2524]: E1213 09:12:00.976505    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:01.980554 kubelet[2524]: E1213 09:12:01.978315    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:02.707087 kubelet[2524]: E1213 09:12:02.704275    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
Dec 13 09:12:02.859288 systemd[1]: cri-containerd-ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d.scope: Deactivated successfully.
Dec 13 09:12:02.994733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d-rootfs.mount: Deactivated successfully.
Dec 13 09:12:03.005892 containerd[1473]: time="2024-12-13T09:12:03.005571547Z" level=info msg="shim disconnected" id=ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d namespace=k8s.io
Dec 13 09:12:03.005892 containerd[1473]: time="2024-12-13T09:12:03.005678001Z" level=warning msg="cleaning up after shim disconnected" id=ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d namespace=k8s.io
Dec 13 09:12:03.005892 containerd[1473]: time="2024-12-13T09:12:03.005691246Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 09:12:03.047926 containerd[1473]: time="2024-12-13T09:12:03.042384434Z" level=error msg="collecting metrics for ce0570274e85966808de93213b133e4eb7cd3ad603f564df0adec010bad96c2d" error="ttrpc: closed: unknown"
Dec 13 09:12:03.078762 kubelet[2524]: I1213 09:12:03.078709    2524 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 09:12:03.220305 systemd[1]: Created slice kubepods-burstable-poddf5dd674_9dba_48dd_acc0_496e39d2ef18.slice - libcontainer container kubepods-burstable-poddf5dd674_9dba_48dd_acc0_496e39d2ef18.slice.
Dec 13 09:12:03.243687 systemd[1]: Created slice kubepods-burstable-pod5ca8008c_9c95_43aa_8201_4f1e59b8ea10.slice - libcontainer container kubepods-burstable-pod5ca8008c_9c95_43aa_8201_4f1e59b8ea10.slice.
Dec 13 09:12:03.259212 kubelet[2524]: I1213 09:12:03.258819    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ca8008c-9c95-43aa-8201-4f1e59b8ea10-config-volume\") pod \"coredns-6f6b679f8f-sjqdz\" (UID: \"5ca8008c-9c95-43aa-8201-4f1e59b8ea10\") " pod="kube-system/coredns-6f6b679f8f-sjqdz"
Dec 13 09:12:03.259212 kubelet[2524]: I1213 09:12:03.258888    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h82w6\" (UniqueName: \"kubernetes.io/projected/5ca8008c-9c95-43aa-8201-4f1e59b8ea10-kube-api-access-h82w6\") pod \"coredns-6f6b679f8f-sjqdz\" (UID: \"5ca8008c-9c95-43aa-8201-4f1e59b8ea10\") " pod="kube-system/coredns-6f6b679f8f-sjqdz"
Dec 13 09:12:03.259212 kubelet[2524]: I1213 09:12:03.258923    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdh6t\" (UniqueName: \"kubernetes.io/projected/df5dd674-9dba-48dd-acc0-496e39d2ef18-kube-api-access-qdh6t\") pod \"coredns-6f6b679f8f-hpjnn\" (UID: \"df5dd674-9dba-48dd-acc0-496e39d2ef18\") " pod="kube-system/coredns-6f6b679f8f-hpjnn"
Dec 13 09:12:03.259212 kubelet[2524]: I1213 09:12:03.258956    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5dd674-9dba-48dd-acc0-496e39d2ef18-config-volume\") pod \"coredns-6f6b679f8f-hpjnn\" (UID: \"df5dd674-9dba-48dd-acc0-496e39d2ef18\") " pod="kube-system/coredns-6f6b679f8f-hpjnn"
Dec 13 09:12:03.259212 kubelet[2524]: I1213 09:12:03.258984    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88388b59-0e34-45d0-b4c1-5da85c561522-tigera-ca-bundle\") pod \"calico-kube-controllers-55499f54f6-hpbbq\" (UID: \"88388b59-0e34-45d0-b4c1-5da85c561522\") " pod="calico-system/calico-kube-controllers-55499f54f6-hpbbq"
Dec 13 09:12:03.259632 kubelet[2524]: I1213 09:12:03.259013    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mzhd\" (UniqueName: \"kubernetes.io/projected/88388b59-0e34-45d0-b4c1-5da85c561522-kube-api-access-9mzhd\") pod \"calico-kube-controllers-55499f54f6-hpbbq\" (UID: \"88388b59-0e34-45d0-b4c1-5da85c561522\") " pod="calico-system/calico-kube-controllers-55499f54f6-hpbbq"
Dec 13 09:12:03.279224 systemd[1]: Created slice kubepods-besteffort-pod88388b59_0e34_45d0_b4c1_5da85c561522.slice - libcontainer container kubepods-besteffort-pod88388b59_0e34_45d0_b4c1_5da85c561522.slice.
Dec 13 09:12:03.317419 systemd[1]: Created slice kubepods-besteffort-pod94ad6c01_b7b0_4277_bf3f_b065b6556e24.slice - libcontainer container kubepods-besteffort-pod94ad6c01_b7b0_4277_bf3f_b065b6556e24.slice.
Dec 13 09:12:03.361962 kubelet[2524]: I1213 09:12:03.359436    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94ad6c01-b7b0-4277-bf3f-b065b6556e24-calico-apiserver-certs\") pod \"calico-apiserver-7c78887f5b-pnmjx\" (UID: \"94ad6c01-b7b0-4277-bf3f-b065b6556e24\") " pod="calico-apiserver/calico-apiserver-7c78887f5b-pnmjx"
Dec 13 09:12:03.361962 kubelet[2524]: I1213 09:12:03.359511    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn56m\" (UniqueName: \"kubernetes.io/projected/94ad6c01-b7b0-4277-bf3f-b065b6556e24-kube-api-access-fn56m\") pod \"calico-apiserver-7c78887f5b-pnmjx\" (UID: \"94ad6c01-b7b0-4277-bf3f-b065b6556e24\") " pod="calico-apiserver/calico-apiserver-7c78887f5b-pnmjx"
Dec 13 09:12:03.361962 kubelet[2524]: I1213 09:12:03.359597    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvq5v\" (UniqueName: \"kubernetes.io/projected/06a2df62-2b3b-4b64-8737-3c196ad7319a-kube-api-access-nvq5v\") pod \"calico-apiserver-7c78887f5b-2s422\" (UID: \"06a2df62-2b3b-4b64-8737-3c196ad7319a\") " pod="calico-apiserver/calico-apiserver-7c78887f5b-2s422"
Dec 13 09:12:03.361962 kubelet[2524]: I1213 09:12:03.359626    2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/06a2df62-2b3b-4b64-8737-3c196ad7319a-calico-apiserver-certs\") pod \"calico-apiserver-7c78887f5b-2s422\" (UID: \"06a2df62-2b3b-4b64-8737-3c196ad7319a\") " pod="calico-apiserver/calico-apiserver-7c78887f5b-2s422"
Dec 13 09:12:03.361360 systemd[1]: Created slice kubepods-besteffort-pod06a2df62_2b3b_4b64_8737_3c196ad7319a.slice - libcontainer container kubepods-besteffort-pod06a2df62_2b3b_4b64_8737_3c196ad7319a.slice.
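
The kubepods slice names in the "Created slice" lines encode the pod's QoS class and UID under the systemd cgroup driver: the UID's hyphens are escaped to underscores and nested as kubepods-<qos>-pod<escaped-uid>.slice (burstable for the coredns pods, besteffort for the calico-apiserver and kube-controllers pods). A sketch of that naming rule, inferred from the pattern visible in the log rather than taken from kubelet's internal helper:

    // slice_name.go — reproduces the systemd cgroup-driver slice naming
    // pattern seen in the "Created slice" lines above. Illustrative only.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        // systemd unit names use '-' for nesting, so the hyphens in the
        // pod UID are escaped to underscores.
        escaped := strings.ReplaceAll(uid, "-", "_")
        if qos == "guaranteed" {
            // guaranteed pods sit directly under kubepods.slice
            return fmt.Sprintf("kubepods-pod%s.slice", escaped)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        fmt.Println(podSlice("burstable", "df5dd674-9dba-48dd-acc0-496e39d2ef18"))
        // kubepods-burstable-poddf5dd674_9dba_48dd_acc0_496e39d2ef18.slice
    }
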
Dec 13 09:12:03.538466 kubelet[2524]: E1213 09:12:03.537696    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:03.548339 containerd[1473]: time="2024-12-13T09:12:03.545713483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpjnn,Uid:df5dd674-9dba-48dd-acc0-496e39d2ef18,Namespace:kube-system,Attempt:0,}"
Dec 13 09:12:03.570420 kubelet[2524]: E1213 09:12:03.565509    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:03.592806 containerd[1473]: time="2024-12-13T09:12:03.592664863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55499f54f6-hpbbq,Uid:88388b59-0e34-45d0-b4c1-5da85c561522,Namespace:calico-system,Attempt:0,}"
Dec 13 09:12:03.598451 systemd[1]: Started sshd@9-165.232.145.99:22-14.63.196.175:60622.service - OpenSSH per-connection server daemon (14.63.196.175:60622).
Dec 13 09:12:03.650903 containerd[1473]: time="2024-12-13T09:12:03.650829286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c78887f5b-pnmjx,Uid:94ad6c01-b7b0-4277-bf3f-b065b6556e24,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 09:12:03.651625 containerd[1473]: time="2024-12-13T09:12:03.651530602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sjqdz,Uid:5ca8008c-9c95-43aa-8201-4f1e59b8ea10,Namespace:kube-system,Attempt:0,}"
Dec 13 09:12:03.703771 containerd[1473]: time="2024-12-13T09:12:03.703232136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c78887f5b-2s422,Uid:06a2df62-2b3b-4b64-8737-3c196ad7319a,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 09:12:04.073558 kubelet[2524]: E1213 09:12:04.067222    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:04.102604 containerd[1473]: time="2024-12-13T09:12:04.100835644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 09:12:04.404506 containerd[1473]: time="2024-12-13T09:12:04.404209958Z" level=error msg="Failed to destroy network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.418734 containerd[1473]: time="2024-12-13T09:12:04.406806843Z" level=error msg="encountered an error cleaning up failed sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.418734 containerd[1473]: time="2024-12-13T09:12:04.411977432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c78887f5b-pnmjx,Uid:94ad6c01-b7b0-4277-bf3f-b065b6556e24,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.423980 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395-shm.mount: Deactivated successfully.
Dec 13 09:12:04.434279 containerd[1473]: time="2024-12-13T09:12:04.429349322Z" level=error msg="Failed to destroy network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.439168 containerd[1473]: time="2024-12-13T09:12:04.439063047Z" level=error msg="encountered an error cleaning up failed sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.439524 containerd[1473]: time="2024-12-13T09:12:04.439473990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sjqdz,Uid:5ca8008c-9c95-43aa-8201-4f1e59b8ea10,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.439781 containerd[1473]: time="2024-12-13T09:12:04.439736789Z" level=error msg="Failed to destroy network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.442109 containerd[1473]: time="2024-12-13T09:12:04.441734836Z" level=error msg="encountered an error cleaning up failed sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.442109 containerd[1473]: time="2024-12-13T09:12:04.441870860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c78887f5b-2s422,Uid:06a2df62-2b3b-4b64-8737-3c196ad7319a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.442279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a-shm.mount: Deactivated successfully.
Dec 13 09:12:04.445833 containerd[1473]: time="2024-12-13T09:12:04.445189949Z" level=error msg="Failed to destroy network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.447563 containerd[1473]: time="2024-12-13T09:12:04.446413523Z" level=error msg="encountered an error cleaning up failed sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.447563 containerd[1473]: time="2024-12-13T09:12:04.446527905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpjnn,Uid:df5dd674-9dba-48dd-acc0-496e39d2ef18,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.451402 kubelet[2524]: E1213 09:12:04.451329    2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.452698 kubelet[2524]: E1213 09:12:04.451447    2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hpjnn"
Dec 13 09:12:04.452698 kubelet[2524]: E1213 09:12:04.451489    2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hpjnn"
Dec 13 09:12:04.452698 kubelet[2524]: E1213 09:12:04.451557    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hpjnn_kube-system(df5dd674-9dba-48dd-acc0-496e39d2ef18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hpjnn_kube-system(df5dd674-9dba-48dd-acc0-496e39d2ef18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hpjnn" podUID="df5dd674-9dba-48dd-acc0-496e39d2ef18"
Dec 13 09:12:04.460474 kubelet[2524]: E1213 09:12:04.455598    2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.460474 kubelet[2524]: E1213 09:12:04.455701    2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sjqdz"
Dec 13 09:12:04.460474 kubelet[2524]: E1213 09:12:04.455737    2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sjqdz"
Dec 13 09:12:04.459616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662-shm.mount: Deactivated successfully.
Dec 13 09:12:04.460890 kubelet[2524]: E1213 09:12:04.455807    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sjqdz_kube-system(5ca8008c-9c95-43aa-8201-4f1e59b8ea10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sjqdz_kube-system(5ca8008c-9c95-43aa-8201-4f1e59b8ea10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sjqdz" podUID="5ca8008c-9c95-43aa-8201-4f1e59b8ea10"
Dec 13 09:12:04.460890 kubelet[2524]: E1213 09:12:04.455889    2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.460890 kubelet[2524]: E1213 09:12:04.455917    2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c78887f5b-pnmjx"
Dec 13 09:12:04.459896 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0-shm.mount: Deactivated successfully.
Dec 13 09:12:04.465091 kubelet[2524]: E1213 09:12:04.455942    2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c78887f5b-pnmjx"
Dec 13 09:12:04.465091 kubelet[2524]: E1213 09:12:04.455976    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c78887f5b-pnmjx_calico-apiserver(94ad6c01-b7b0-4277-bf3f-b065b6556e24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c78887f5b-pnmjx_calico-apiserver(94ad6c01-b7b0-4277-bf3f-b065b6556e24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c78887f5b-pnmjx" podUID="94ad6c01-b7b0-4277-bf3f-b065b6556e24"
Dec 13 09:12:04.465091 kubelet[2524]: E1213 09:12:04.456044    2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.465358 kubelet[2524]: E1213 09:12:04.456071    2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c78887f5b-2s422"
Dec 13 09:12:04.465358 kubelet[2524]: E1213 09:12:04.456093    2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c78887f5b-2s422"
Dec 13 09:12:04.465358 kubelet[2524]: E1213 09:12:04.456130    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c78887f5b-2s422_calico-apiserver(06a2df62-2b3b-4b64-8737-3c196ad7319a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c78887f5b-2s422_calico-apiserver(06a2df62-2b3b-4b64-8737-3c196ad7319a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c78887f5b-2s422" podUID="06a2df62-2b3b-4b64-8737-3c196ad7319a"
Dec 13 09:12:04.476217 containerd[1473]: time="2024-12-13T09:12:04.474742746Z" level=error msg="Failed to destroy network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.476217 containerd[1473]: time="2024-12-13T09:12:04.475864637Z" level=error msg="encountered an error cleaning up failed sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.476217 containerd[1473]: time="2024-12-13T09:12:04.475980915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55499f54f6-hpbbq,Uid:88388b59-0e34-45d0-b4c1-5da85c561522,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.477230 kubelet[2524]: E1213 09:12:04.476951    2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.477230 kubelet[2524]: E1213 09:12:04.477067    2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55499f54f6-hpbbq"
Dec 13 09:12:04.477230 kubelet[2524]: E1213 09:12:04.477098    2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55499f54f6-hpbbq"
Dec 13 09:12:04.481273 kubelet[2524]: E1213 09:12:04.477153    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55499f54f6-hpbbq_calico-system(88388b59-0e34-45d0-b4c1-5da85c561522)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55499f54f6-hpbbq_calico-system(88388b59-0e34-45d0-b4c1-5da85c561522)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55499f54f6-hpbbq" podUID="88388b59-0e34-45d0-b4c1-5da85c561522"
Dec 13 09:12:04.491634 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b-shm.mount: Deactivated successfully.
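Every failure in the burst above reduces to the same missing file: before doing any ADD or DEL work, the Calico CNI plugin reads /var/lib/calico/nodename, and that file only exists once the calico/node container has started and bind-mounted /var/lib/calico/. A minimal Go sketch of that guard follows; the file path and error wording come straight from the log lines above, while the function names are illustrative rather than Calico's actual source:

    // nodename_check.go - a minimal sketch of the guard the errors above
    // describe: refuse CNI work until calico/node has written its nodename
    // file. Names here are illustrative, not Calico's real code.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    func readNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // This is the failure mode in the log: the open fails with
            // ENOENT until the calico/node container creates the file.
            return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node:", name)
    }

Once calico-node comes up (the StartContainer at 09:12:15 below), the file appears and the same sandboxes tear down and recreate cleanly.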
Dec 13 09:12:04.494369 sshd[3261]: Invalid user newftpuser from 14.63.196.175 port 60622
Dec 13 09:12:04.649397 sshd[3261]: Received disconnect from 14.63.196.175 port 60622:11: Bye Bye [preauth]
Dec 13 09:12:04.649397 sshd[3261]: Disconnected from invalid user newftpuser 14.63.196.175 port 60622 [preauth]
Dec 13 09:12:04.651900 systemd[1]: sshd@9-165.232.145.99:22-14.63.196.175:60622.service: Deactivated successfully.
Dec 13 09:12:04.738305 systemd[1]: Created slice kubepods-besteffort-pod0399f05a_b42a_4620_afd9_27c69d03846d.slice - libcontainer container kubepods-besteffort-pod0399f05a_b42a_4620_afd9_27c69d03846d.slice.
Dec 13 09:12:04.751039 containerd[1473]: time="2024-12-13T09:12:04.750951240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bbgp8,Uid:0399f05a-b42a-4620-afd9-27c69d03846d,Namespace:calico-system,Attempt:0,}"
Dec 13 09:12:04.921335 containerd[1473]: time="2024-12-13T09:12:04.921232027Z" level=error msg="Failed to destroy network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.931323 containerd[1473]: time="2024-12-13T09:12:04.929351692Z" level=error msg="encountered an error cleaning up failed sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.931323 containerd[1473]: time="2024-12-13T09:12:04.929475542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bbgp8,Uid:0399f05a-b42a-4620-afd9-27c69d03846d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.931446 kubelet[2524]: E1213 09:12:04.930167    2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:04.931446 kubelet[2524]: E1213 09:12:04.930267    2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bbgp8"
Dec 13 09:12:04.931446 kubelet[2524]: E1213 09:12:04.930304    2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bbgp8"
Dec 13 09:12:04.949739 kubelet[2524]: E1213 09:12:04.946605    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bbgp8_calico-system(0399f05a-b42a-4620-afd9-27c69d03846d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bbgp8_calico-system(0399f05a-b42a-4620-afd9-27c69d03846d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
Dec 13 09:12:05.148849 kubelet[2524]: I1213 09:12:05.134318    2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:05.156496 containerd[1473]: time="2024-12-13T09:12:05.155313581Z" level=info msg="StopPodSandbox for \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\""
Dec 13 09:12:05.158709 kubelet[2524]: I1213 09:12:05.158662    2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:05.163065 containerd[1473]: time="2024-12-13T09:12:05.162123320Z" level=info msg="StopPodSandbox for \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\""
Dec 13 09:12:05.163947 containerd[1473]: time="2024-12-13T09:12:05.163577156Z" level=info msg="Ensure that sandbox ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0 in task-service has been cleanup successfully"
Dec 13 09:12:05.168209 containerd[1473]: time="2024-12-13T09:12:05.164237373Z" level=info msg="Ensure that sandbox f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662 in task-service has been cleanup successfully"
Dec 13 09:12:05.199333 kubelet[2524]: I1213 09:12:05.199282    2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:05.202707 containerd[1473]: time="2024-12-13T09:12:05.202127686Z" level=info msg="StopPodSandbox for \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\""
Dec 13 09:12:05.202707 containerd[1473]: time="2024-12-13T09:12:05.202414274Z" level=info msg="Ensure that sandbox 6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b in task-service has been cleanup successfully"
Dec 13 09:12:05.205865 kubelet[2524]: I1213 09:12:05.205825    2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:05.208837 containerd[1473]: time="2024-12-13T09:12:05.208277578Z" level=info msg="StopPodSandbox for \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\""
Dec 13 09:12:05.208837 containerd[1473]: time="2024-12-13T09:12:05.208574937Z" level=info msg="Ensure that sandbox d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5 in task-service has been cleanup successfully"
Dec 13 09:12:05.230673 kubelet[2524]: I1213 09:12:05.230618    2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:05.233522 containerd[1473]: time="2024-12-13T09:12:05.232863948Z" level=info msg="StopPodSandbox for \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\""
Dec 13 09:12:05.233522 containerd[1473]: time="2024-12-13T09:12:05.233154217Z" level=info msg="Ensure that sandbox 6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a in task-service has been cleanup successfully"
Dec 13 09:12:05.283916 kubelet[2524]: I1213 09:12:05.283869    2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:05.292880 containerd[1473]: time="2024-12-13T09:12:05.292807103Z" level=info msg="StopPodSandbox for \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\""
Dec 13 09:12:05.293658 containerd[1473]: time="2024-12-13T09:12:05.293599432Z" level=info msg="Ensure that sandbox 90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395 in task-service has been cleanup successfully"
Dec 13 09:12:05.447272 containerd[1473]: time="2024-12-13T09:12:05.446959973Z" level=error msg="StopPodSandbox for \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\" failed" error="failed to destroy network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:05.451602 kubelet[2524]: E1213 09:12:05.447477    2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:05.451602 kubelet[2524]: E1213 09:12:05.447554    2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"}
Dec 13 09:12:05.451602 kubelet[2524]: E1213 09:12:05.447701    2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06a2df62-2b3b-4b64-8737-3c196ad7319a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 09:12:05.451602 kubelet[2524]: E1213 09:12:05.447740    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06a2df62-2b3b-4b64-8737-3c196ad7319a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c78887f5b-2s422" podUID="06a2df62-2b3b-4b64-8737-3c196ad7319a"
Dec 13 09:12:05.458850 containerd[1473]: time="2024-12-13T09:12:05.458747527Z" level=error msg="StopPodSandbox for \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\" failed" error="failed to destroy network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:05.460733 kubelet[2524]: E1213 09:12:05.459873    2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:05.460733 kubelet[2524]: E1213 09:12:05.459962    2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"}
Dec 13 09:12:05.460733 kubelet[2524]: E1213 09:12:05.460523    2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df5dd674-9dba-48dd-acc0-496e39d2ef18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 09:12:05.460733 kubelet[2524]: E1213 09:12:05.460628    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df5dd674-9dba-48dd-acc0-496e39d2ef18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hpjnn" podUID="df5dd674-9dba-48dd-acc0-496e39d2ef18"
Dec 13 09:12:05.497663 containerd[1473]: time="2024-12-13T09:12:05.497264355Z" level=error msg="StopPodSandbox for \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\" failed" error="failed to destroy network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:05.498839 kubelet[2524]: E1213 09:12:05.498410    2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:05.498839 kubelet[2524]: E1213 09:12:05.498491    2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"}
Dec 13 09:12:05.498839 kubelet[2524]: E1213 09:12:05.498545    2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0399f05a-b42a-4620-afd9-27c69d03846d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 09:12:05.498839 kubelet[2524]: E1213 09:12:05.498584    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0399f05a-b42a-4620-afd9-27c69d03846d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bbgp8" podUID="0399f05a-b42a-4620-afd9-27c69d03846d"
Dec 13 09:12:05.502159 containerd[1473]: time="2024-12-13T09:12:05.499401273Z" level=error msg="StopPodSandbox for \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\" failed" error="failed to destroy network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:05.505064 kubelet[2524]: E1213 09:12:05.502915    2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:05.505064 kubelet[2524]: E1213 09:12:05.503013    2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"}
Dec 13 09:12:05.505064 kubelet[2524]: E1213 09:12:05.503123    2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94ad6c01-b7b0-4277-bf3f-b065b6556e24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 09:12:05.505064 kubelet[2524]: E1213 09:12:05.503162    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94ad6c01-b7b0-4277-bf3f-b065b6556e24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c78887f5b-pnmjx" podUID="94ad6c01-b7b0-4277-bf3f-b065b6556e24"
Dec 13 09:12:05.512750 containerd[1473]: time="2024-12-13T09:12:05.512402873Z" level=error msg="StopPodSandbox for \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\" failed" error="failed to destroy network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:05.513345 kubelet[2524]: E1213 09:12:05.513286    2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:05.514058 containerd[1473]: time="2024-12-13T09:12:05.513559356Z" level=error msg="StopPodSandbox for \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\" failed" error="failed to destroy network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:05.514147 kubelet[2524]: E1213 09:12:05.513730    2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"}
Dec 13 09:12:05.514147 kubelet[2524]: E1213 09:12:05.513794    2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ca8008c-9c95-43aa-8201-4f1e59b8ea10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 09:12:05.514147 kubelet[2524]: E1213 09:12:05.513831    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ca8008c-9c95-43aa-8201-4f1e59b8ea10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sjqdz" podUID="5ca8008c-9c95-43aa-8201-4f1e59b8ea10"
Dec 13 09:12:05.514147 kubelet[2524]: E1213 09:12:05.513929    2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:05.514147 kubelet[2524]: E1213 09:12:05.513955    2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"}
Dec 13 09:12:05.515079 kubelet[2524]: E1213 09:12:05.513983    2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88388b59-0e34-45d0-b4c1-5da85c561522\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 09:12:05.515079 kubelet[2524]: E1213 09:12:05.514007    2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88388b59-0e34-45d0-b4c1-5da85c561522\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55499f54f6-hpbbq" podUID="88388b59-0e34-45d0-b4c1-5da85c561522"
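The kubelet reacts to each failed RunPodSandbox/StopPodSandbox pair by logging "Error syncing pod, skipping" and requeueing the pod: the sandbox is parked in SANDBOX_UNKNOWN and the sync is retried with backoff until the CNI becomes healthy. A sketch of that requeue-with-backoff shape, with illustrative durations rather than the kubelet's own:

    // A sketch of the retry pattern applied to the pods above: each
    // failed sync is requeued with exponential backoff rather than
    // retried in a tight loop.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // syncPod stands in for the kubelet's pod sync; it keeps failing
    // the way the sandboxes above do until the CNI is healthy.
    func syncPod() error {
        return errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
    }

    func main() {
        backoff := 500 * time.Millisecond // illustrative; not kubelet's values
        const maxBackoff = 8 * time.Second
        for attempt := 1; attempt <= 5; attempt++ { // bounded only to end the sketch
            err := syncPod()
            if err == nil {
                fmt.Println("pod synced")
                return
            }
            fmt.Printf("attempt %d: error syncing pod, retrying in %v: %v\n", attempt, backoff, err)
            time.Sleep(backoff)
            if backoff *= 2; backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }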
Dec 13 09:12:13.941510 systemd[1]: Started sshd@10-165.232.145.99:22-147.75.109.163:48086.service - OpenSSH per-connection server daemon (147.75.109.163:48086).
Dec 13 09:12:14.192686 sshd[3561]: Accepted publickey for core from 147.75.109.163 port 48086 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:14.196202 sshd[3561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:14.217376 systemd-logind[1450]: New session 8 of user core.
Dec 13 09:12:14.221790 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 09:12:14.517955 sshd[3561]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:14.527339 systemd[1]: sshd@10-165.232.145.99:22-147.75.109.163:48086.service: Deactivated successfully.
Dec 13 09:12:14.533862 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 09:12:14.535345 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit.
Dec 13 09:12:14.537954 systemd-logind[1450]: Removed session 8.
Dec 13 09:12:14.665823 kubelet[2524]: I1213 09:12:14.665751    2524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 09:12:14.679095 kubelet[2524]: E1213 09:12:14.678298    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
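The "Nameserver limits exceeded" warning means the node's resolver configuration carries more nameservers than the kubelet will apply; the applied line it reports suggests it kept only the first three entries of the raw list, which here includes 67.207.67.3 twice. A sketch of a cap that also deduplicates; the three-entry limit matches the warning above, and the fourth resolver is invented for illustration:

    // Sketch of the cap the kubelet warning describes: at most three
    // nameservers are applied, extras are dropped. The dedup step is an
    // illustration of how the duplicate in the applied line could be
    // avoided; the kubelet itself truncates the raw list.
    package main

    import "fmt"

    const maxNameservers = 3 // the limit implied by the warning above

    func capNameservers(raw []string) []string {
        seen := make(map[string]bool)
        out := make([]string, 0, maxNameservers)
        for _, ns := range raw {
            if seen[ns] {
                continue // skip duplicates like the repeated 67.207.67.3
            }
            seen[ns] = true
            out = append(out, ns)
            if len(out) == maxNameservers {
                break
            }
        }
        return out
    }

    func main() {
        // 67.207.67.1 is a hypothetical fourth entry for illustration.
        raw := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "67.207.67.1"}
        fmt.Println(capNameservers(raw)) // [67.207.67.3 67.207.67.2 67.207.67.1]
    }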
Dec 13 09:12:14.987931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171723809.mount: Deactivated successfully.
Dec 13 09:12:15.131181 containerd[1473]: time="2024-12-13T09:12:15.111922750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Dec 13 09:12:15.149356 containerd[1473]: time="2024-12-13T09:12:15.149210655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:15.173677 containerd[1473]: time="2024-12-13T09:12:15.172230325Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:15.196135 containerd[1473]: time="2024-12-13T09:12:15.195990068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:15.201218 containerd[1473]: time="2024-12-13T09:12:15.200984227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 11.093857642s"
Dec 13 09:12:15.201218 containerd[1473]: time="2024-12-13T09:12:15.201075408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
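The pull above moved 142742010 bytes (the "bytes read" figure at 09:12:15.131) in the reported 11.093857642s, roughly 12.9 MB/s; a quick check of that arithmetic:

    // Back-of-envelope throughput for the calico/node pull above: bytes
    // read divided by the wall time containerd reports.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 142742010                   // "bytes read" from the stop-pulling line
        const elapsed = 11093857642 * time.Nanosecond // the reported 11.093857642s
        fmt.Printf("%.1f MB/s\n", float64(bytesRead)/elapsed.Seconds()/1e6) // ≈ 12.9 MB/s
    }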
Dec 13 09:12:15.268527 containerd[1473]: time="2024-12-13T09:12:15.268316425Z" level=info msg="CreateContainer within sandbox \"5e2f6a5c37ccfbdb15a140a9a01870bb8835f17b65530d9acc850130982c2638\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 09:12:15.358475 kubelet[2524]: E1213 09:12:15.358433    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:15.371092 containerd[1473]: time="2024-12-13T09:12:15.370955337Z" level=info msg="CreateContainer within sandbox \"5e2f6a5c37ccfbdb15a140a9a01870bb8835f17b65530d9acc850130982c2638\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"86df5523985f71007ba24c6c63b0d8dae953cdd8f49e1d7f531dac22ad32e6b7\""
Dec 13 09:12:15.377853 containerd[1473]: time="2024-12-13T09:12:15.377727858Z" level=info msg="StartContainer for \"86df5523985f71007ba24c6c63b0d8dae953cdd8f49e1d7f531dac22ad32e6b7\""
Dec 13 09:12:15.433369 systemd[1]: Started cri-containerd-86df5523985f71007ba24c6c63b0d8dae953cdd8f49e1d7f531dac22ad32e6b7.scope - libcontainer container 86df5523985f71007ba24c6c63b0d8dae953cdd8f49e1d7f531dac22ad32e6b7.
Dec 13 09:12:15.498131 containerd[1473]: time="2024-12-13T09:12:15.498073728Z" level=info msg="StartContainer for \"86df5523985f71007ba24c6c63b0d8dae953cdd8f49e1d7f531dac22ad32e6b7\" returns successfully"
Dec 13 09:12:15.612650 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 09:12:15.614007 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 13 09:12:16.359399 kubelet[2524]: E1213 09:12:16.359334    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:16.415644 kubelet[2524]: I1213 09:12:16.404274    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gw94v" podStartSLOduration=2.305214721 podStartE2EDuration="28.404239909s" podCreationTimestamp="2024-12-13 09:11:48 +0000 UTC" firstStartedPulling="2024-12-13 09:11:49.103242734 +0000 UTC m=+16.554396384" lastFinishedPulling="2024-12-13 09:12:15.202267964 +0000 UTC m=+42.653421572" observedRunningTime="2024-12-13 09:12:16.400943615 +0000 UTC m=+43.852097255" watchObservedRunningTime="2024-12-13 09:12:16.404239909 +0000 UTC m=+43.855393537"
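The startup-latency line is internally consistent: podStartE2EDuration covers pod creation at 09:11:48 through the observed running time, and subtracting the image-pull window (taken from the monotonic m=+ offsets, m=+16.554396384 to m=+42.653421572) reproduces the 2.305214721s podStartSLOduration exactly. A check using the offsets from the log:

    // Reconstructing podStartSLOduration from the line above: the SLO
    // figure is the E2E startup time minus the image-pull window, using
    // the monotonic (m=+) offsets the log records.
    package main

    import "fmt"

    func main() {
        const (
            e2e       = 28.404239909 // podStartE2EDuration, seconds
            pullStart = 16.554396384 // firstStartedPulling, m=+ offset
            pullEnd   = 42.653421572 // lastFinishedPulling, m=+ offset
        )
        fmt.Printf("%.9fs\n", e2e-(pullEnd-pullStart)) // 2.305214721s == podStartSLOduration
    }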
Dec 13 09:12:16.705746 containerd[1473]: time="2024-12-13T09:12:16.704977819Z" level=info msg="StopPodSandbox for \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\""
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.804 [INFO][3679] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.805 [INFO][3679] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" iface="eth0" netns="/var/run/netns/cni-b8e32e5f-b7c3-fc8e-9155-7580d06066f6"
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.806 [INFO][3679] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" iface="eth0" netns="/var/run/netns/cni-b8e32e5f-b7c3-fc8e-9155-7580d06066f6"
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.810 [INFO][3679] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" iface="eth0" netns="/var/run/netns/cni-b8e32e5f-b7c3-fc8e-9155-7580d06066f6"
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.810 [INFO][3679] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.810 [INFO][3679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.933 [INFO][3685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" HandleID="k8s-pod-network.f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.936 [INFO][3685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.936 [INFO][3685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.950 [WARNING][3685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" HandleID="k8s-pod-network.f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.951 [INFO][3685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" HandleID="k8s-pod-network.f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.954 [INFO][3685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:16.960221 containerd[1473]: 2024-12-13 09:12:16.957 [INFO][3679] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:16.964136 containerd[1473]: time="2024-12-13T09:12:16.964044511Z" level=info msg="TearDown network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\" successfully"
Dec 13 09:12:16.964136 containerd[1473]: time="2024-12-13T09:12:16.964104306Z" level=info msg="StopPodSandbox for \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\" returns successfully"
Dec 13 09:12:16.969539 systemd[1]: run-netns-cni\x2db8e32e5f\x2db7c3\x2dfc8e\x2d9155\x2d7580d06066f6.mount: Deactivated successfully.
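With calico-node running, the same StopPodSandbox that failed at 09:12:05 now completes: the CNI DEL enters the netns, finds the veth already gone, and releases the IPAM handle, treating the not-found address as already released rather than as an error (the WARNING at 09:12:16.950). A sketch of that idempotent-release behavior; the store and method here are stand-ins, not Calico's actual IPAM API:

    // Sketch of the idempotent release seen in the teardown trace: a
    // missing allocation is treated as already released, not a failure,
    // so a DEL after a half-finished ADD always succeeds.
    package main

    import "fmt"

    type ipamStore map[string]string // handleID -> address

    func (s ipamStore) ReleaseByHandle(handle string) {
        if _, ok := s[handle]; !ok {
            fmt.Printf("[WARNING] asked to release %s but it doesn't exist, ignoring\n", handle)
            return // idempotent: nothing to do
        }
        delete(s, handle)
        fmt.Printf("released address for %s\n", handle)
    }

    func main() {
        s := ipamStore{}
        s.ReleaseByHandle("k8s-pod-network.f05121bf") // DEL after the failed ADD above
        s.ReleaseByHandle("k8s-pod-network.f05121bf") // a retry is equally safe
    }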
Dec 13 09:12:16.978612 containerd[1473]: time="2024-12-13T09:12:16.978012473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c78887f5b-2s422,Uid:06a2df62-2b3b-4b64-8737-3c196ad7319a,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 09:12:17.379654 systemd-networkd[1371]: calicfd3aeebda3: Link UP
Dec 13 09:12:17.394275 kubelet[2524]: E1213 09:12:17.385222    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:17.385923 systemd-networkd[1371]: calicfd3aeebda3: Gained carrier
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.079 [INFO][3693] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.097 [INFO][3693] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0 calico-apiserver-7c78887f5b- calico-apiserver  06a2df62-2b3b-4b64-8737-3c196ad7319a 857 0 2024-12-13 09:11:48 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c78887f5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-4081.2.1-7-516c4b3017  calico-apiserver-7c78887f5b-2s422 eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicfd3aeebda3  [] []}} ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-2s422" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.098 [INFO][3693] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-2s422" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.159 [INFO][3704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" HandleID="k8s-pod-network.524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.177 [INFO][3704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" HandleID="k8s-pod-network.524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-7-516c4b3017", "pod":"calico-apiserver-7c78887f5b-2s422", "timestamp":"2024-12-13 09:12:17.159256776 +0000 UTC"}, Hostname:"ci-4081.2.1-7-516c4b3017", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.177 [INFO][3704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.177 [INFO][3704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.177 [INFO][3704] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-7-516c4b3017'
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.182 [INFO][3704] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.276 [INFO][3704] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.289 [INFO][3704] ipam/ipam.go 489: Trying affinity for 192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.292 [INFO][3704] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.297 [INFO][3704] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.298 [INFO][3704] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.301 [INFO][3704] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.319 [INFO][3704] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.332 [INFO][3704] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.65/26] block=192.168.93.64/26 handle="k8s-pod-network.524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.332 [INFO][3704] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.65/26] handle="k8s-pod-network.524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.332 [INFO][3704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:17.428514 containerd[1473]: 2024-12-13 09:12:17.333 [INFO][3704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.65/26] IPv6=[] ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" HandleID="k8s-pod-network.524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:17.430302 containerd[1473]: 2024-12-13 09:12:17.337 [INFO][3693] cni-plugin/k8s.go 386: Populated endpoint ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-2s422" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0", GenerateName:"calico-apiserver-7c78887f5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a2df62-2b3b-4b64-8737-3c196ad7319a", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c78887f5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"", Pod:"calico-apiserver-7c78887f5b-2s422", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfd3aeebda3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:17.430302 containerd[1473]: 2024-12-13 09:12:17.337 [INFO][3693] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.65/32] ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-2s422" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:17.430302 containerd[1473]: 2024-12-13 09:12:17.337 [INFO][3693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfd3aeebda3 ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-2s422" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:17.430302 containerd[1473]: 2024-12-13 09:12:17.365 [INFO][3693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-2s422" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:17.430302 containerd[1473]: 2024-12-13 09:12:17.366 [INFO][3693] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-2s422" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0", GenerateName:"calico-apiserver-7c78887f5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a2df62-2b3b-4b64-8737-3c196ad7319a", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c78887f5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175", Pod:"calico-apiserver-7c78887f5b-2s422", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfd3aeebda3", MAC:"1e:23:e5:0f:68:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:17.430302 containerd[1473]: 2024-12-13 09:12:17.406 [INFO][3693] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-2s422" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
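[Annotation] The IPAM exchange above is the standard Calico allocation path: acquire the host-wide lock, confirm this node's affinity for the 192.168.93.64/26 block, claim the next free ordinal, write the block back, release the lock. The first pod here lands on 192.168.93.65. A minimal sketch (not Calico's code) of indexing an ordinal into that block with Go's net/netip; nthAddr is a hypothetical helper for illustration:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // nthAddr steps to the given ordinal inside a block, the way an
    // allocator indexes into a /26 (64 addresses, ordinals 0-63).
    func nthAddr(p netip.Prefix, ordinal int) netip.Addr {
    	a := p.Masked().Addr()
    	for i := 0; i < ordinal; i++ {
    		a = a.Next()
    	}
    	return a
    }

    func main() {
    	block := netip.MustParsePrefix("192.168.93.64/26")
    	fmt.Println(nthAddr(block, 1)) // 192.168.93.65, as claimed above
    }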
Dec 13 09:12:17.622497 containerd[1473]: time="2024-12-13T09:12:17.620620549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:17.632091 containerd[1473]: time="2024-12-13T09:12:17.628121330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:17.632091 containerd[1473]: time="2024-12-13T09:12:17.628184085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:17.632091 containerd[1473]: time="2024-12-13T09:12:17.628416925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:17.706258 containerd[1473]: time="2024-12-13T09:12:17.705724124Z" level=info msg="StopPodSandbox for \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\""
Dec 13 09:12:17.728375 systemd[1]: Started cri-containerd-524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175.scope - libcontainer container 524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175.
Dec 13 09:12:17.954191 containerd[1473]: time="2024-12-13T09:12:17.953264599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c78887f5b-2s422,Uid:06a2df62-2b3b-4b64-8737-3c196ad7319a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175\""
Dec 13 09:12:17.975018 containerd[1473]: time="2024-12-13T09:12:17.974637356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.859 [INFO][3865] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.860 [INFO][3865] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" iface="eth0" netns="/var/run/netns/cni-088caf4e-729a-9653-0e85-0f854a2f6743"
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.861 [INFO][3865] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" iface="eth0" netns="/var/run/netns/cni-088caf4e-729a-9653-0e85-0f854a2f6743"
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.861 [INFO][3865] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" iface="eth0" netns="/var/run/netns/cni-088caf4e-729a-9653-0e85-0f854a2f6743"
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.861 [INFO][3865] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.861 [INFO][3865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.952 [INFO][3881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" HandleID="k8s-pod-network.ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.960 [INFO][3881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.961 [INFO][3881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.986 [WARNING][3881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" HandleID="k8s-pod-network.ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.986 [INFO][3881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" HandleID="k8s-pod-network.ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.992 [INFO][3881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:18.005618 containerd[1473]: 2024-12-13 09:12:17.999 [INFO][3865] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:18.011538 containerd[1473]: time="2024-12-13T09:12:18.011235782Z" level=info msg="TearDown network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\" successfully"
Dec 13 09:12:18.011538 containerd[1473]: time="2024-12-13T09:12:18.011327616Z" level=info msg="StopPodSandbox for \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\" returns successfully"
Dec 13 09:12:18.013270 kubelet[2524]: E1213 09:12:18.012292    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:18.015940 containerd[1473]: time="2024-12-13T09:12:18.015440117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpjnn,Uid:df5dd674-9dba-48dd-acc0-496e39d2ef18,Namespace:kube-system,Attempt:1,}"
Dec 13 09:12:18.018401 systemd[1]: run-netns-cni\x2d088caf4e\x2d729a\x2d9653\x2d0e85\x2d0f854a2f6743.mount: Deactivated successfully.
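[Annotation] systemd reports the netns mount unit with '-' escaped as \x2d, because '-' is the path separator in unit names. A simplified sketch of just that rule; the real systemd-escape(1) algorithm also handles leading dots and other non-alphanumeric characters:

    package main

    import "fmt"

    // escapeDashes shows only the '-' to \x2d rule of systemd
    // unit-name escaping; it is a simplification, not the full
    // systemd-escape(1) algorithm.
    func escapeDashes(s string) string {
    	out := make([]byte, 0, len(s))
    	for i := 0; i < len(s); i++ {
    		if s[i] == '-' {
    			out = append(out, `\x2d`...)
    		} else {
    			out = append(out, s[i])
    		}
    	}
    	return string(out)
    }

    func main() {
    	// Reproduces the unit name logged above.
    	fmt.Println(escapeDashes("cni-088caf4e-729a-9653-0e85-0f854a2f6743"))
    }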
Dec 13 09:12:18.413994 systemd-networkd[1371]: cali3a197a967d1: Link UP
Dec 13 09:12:18.416599 systemd-networkd[1371]: cali3a197a967d1: Gained carrier
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.172 [INFO][3896] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.213 [INFO][3896] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0 coredns-6f6b679f8f- kube-system  df5dd674-9dba-48dd-acc0-496e39d2ef18 868 0 2024-12-13 09:11:38 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-4081.2.1-7-516c4b3017  coredns-6f6b679f8f-hpjnn eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali3a197a967d1  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpjnn" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.213 [INFO][3896] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpjnn" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.306 [INFO][3913] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" HandleID="k8s-pod-network.574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.322 [INFO][3913] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" HandleID="k8s-pod-network.574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003104d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-7-516c4b3017", "pod":"coredns-6f6b679f8f-hpjnn", "timestamp":"2024-12-13 09:12:18.306670103 +0000 UTC"}, Hostname:"ci-4081.2.1-7-516c4b3017", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.323 [INFO][3913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.323 [INFO][3913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.323 [INFO][3913] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-7-516c4b3017'
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.328 [INFO][3913] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.337 [INFO][3913] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.349 [INFO][3913] ipam/ipam.go 489: Trying affinity for 192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.352 [INFO][3913] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.357 [INFO][3913] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.357 [INFO][3913] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.361 [INFO][3913] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.369 [INFO][3913] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.394 [INFO][3913] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.66/26] block=192.168.93.64/26 handle="k8s-pod-network.574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.394 [INFO][3913] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.66/26] handle="k8s-pod-network.574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.394 [INFO][3913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:18.449385 containerd[1473]: 2024-12-13 09:12:18.394 [INFO][3913] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.66/26] IPv6=[] ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" HandleID="k8s-pod-network.574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:18.451761 containerd[1473]: 2024-12-13 09:12:18.401 [INFO][3896] cni-plugin/k8s.go 386: Populated endpoint ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpjnn" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"df5dd674-9dba-48dd-acc0-496e39d2ef18", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"", Pod:"coredns-6f6b679f8f-hpjnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a197a967d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:18.451761 containerd[1473]: 2024-12-13 09:12:18.402 [INFO][3896] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.66/32] ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpjnn" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:18.451761 containerd[1473]: 2024-12-13 09:12:18.402 [INFO][3896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a197a967d1 ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpjnn" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:18.451761 containerd[1473]: 2024-12-13 09:12:18.416 [INFO][3896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpjnn" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:18.451761 containerd[1473]: 2024-12-13 09:12:18.418 [INFO][3896] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpjnn" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"df5dd674-9dba-48dd-acc0-496e39d2ef18", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1", Pod:"coredns-6f6b679f8f-hpjnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a197a967d1", MAC:"2e:1c:1a:d4:51:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:18.451761 containerd[1473]: 2024-12-13 09:12:18.443 [INFO][3896] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpjnn" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
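[Annotation] The endpoint dumps above print the WorkloadEndpointPort values from the Go struct in hex: 0x35 is 53 (the dns and dns-tcp ports) and 0x23c1 is 9153 (the CoreDNS metrics port). For reference:

    package main

    import "fmt"

    func main() {
    	// Decode the hex port values from the WorkloadEndpointPort dump.
    	fmt.Println(0x35, 0x23c1) // 53 9153
    }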
Dec 13 09:12:18.511797 containerd[1473]: time="2024-12-13T09:12:18.511257818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:18.512018 containerd[1473]: time="2024-12-13T09:12:18.511776135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:18.512018 containerd[1473]: time="2024-12-13T09:12:18.511943129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:18.514183 containerd[1473]: time="2024-12-13T09:12:18.512532583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:18.553071 kernel: bpftool[3983]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
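[Annotation] The kernel warning above flags a memfd_create(2) call from bpftool that sets neither MFD_EXEC nor MFD_NOEXEC_SEAL; kernels from 6.3 onward log this when a caller leaves executability unspecified. A hedged sketch of the flag the kernel wants to see, via golang.org/x/sys/unix; the constant is defined locally (value from the kernel uapi headers) in case the vendored x/sys predates it:

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    // MFD_NOEXEC_SEAL, Linux >= 6.3.
    const mfdNoexecSeal = 0x0008

    func main() {
    	// Creating the memfd with an explicit non-executable seal avoids
    	// the "called without MFD_EXEC or MFD_NOEXEC_SEAL" warning.
    	fd, err := unix.MemfdCreate("demo", unix.MFD_CLOEXEC|mfdNoexecSeal)
    	if err != nil {
    		panic(err) // EINVAL on kernels older than 6.3
    	}
    	defer unix.Close(fd)
    	fmt.Println("memfd fd:", fd)
    }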
Dec 13 09:12:18.583401 systemd[1]: Started cri-containerd-574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1.scope - libcontainer container 574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1.
Dec 13 09:12:18.667289 containerd[1473]: time="2024-12-13T09:12:18.666483335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpjnn,Uid:df5dd674-9dba-48dd-acc0-496e39d2ef18,Namespace:kube-system,Attempt:1,} returns sandbox id \"574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1\""
Dec 13 09:12:18.670520 kubelet[2524]: E1213 09:12:18.670459    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:18.676823 containerd[1473]: time="2024-12-13T09:12:18.676757702Z" level=info msg="CreateContainer within sandbox \"574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 09:12:18.713995 containerd[1473]: time="2024-12-13T09:12:18.711353592Z" level=info msg="StopPodSandbox for \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\""
Dec 13 09:12:18.791617 containerd[1473]: time="2024-12-13T09:12:18.788257800Z" level=info msg="CreateContainer within sandbox \"574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4588355309af04a0685719ff66ef38b46da70a5f42c9397d69bfacd242ec7603\""
Dec 13 09:12:18.791617 containerd[1473]: time="2024-12-13T09:12:18.790344005Z" level=info msg="StartContainer for \"4588355309af04a0685719ff66ef38b46da70a5f42c9397d69bfacd242ec7603\""
Dec 13 09:12:18.896554 systemd[1]: Started cri-containerd-4588355309af04a0685719ff66ef38b46da70a5f42c9397d69bfacd242ec7603.scope - libcontainer container 4588355309af04a0685719ff66ef38b46da70a5f42c9397d69bfacd242ec7603.
Dec 13 09:12:19.006863 containerd[1473]: time="2024-12-13T09:12:19.005415432Z" level=info msg="StartContainer for \"4588355309af04a0685719ff66ef38b46da70a5f42c9397d69bfacd242ec7603\" returns successfully"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:18.936 [INFO][4011] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:18.937 [INFO][4011] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" iface="eth0" netns="/var/run/netns/cni-d03795fc-a3c5-adfc-2eb8-83d8fe84fb50"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:18.937 [INFO][4011] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" iface="eth0" netns="/var/run/netns/cni-d03795fc-a3c5-adfc-2eb8-83d8fe84fb50"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:18.938 [INFO][4011] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" iface="eth0" netns="/var/run/netns/cni-d03795fc-a3c5-adfc-2eb8-83d8fe84fb50"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:18.938 [INFO][4011] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:18.938 [INFO][4011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:19.019 [INFO][4042] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" HandleID="k8s-pod-network.d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:19.020 [INFO][4042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:19.020 [INFO][4042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:19.031 [WARNING][4042] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" HandleID="k8s-pod-network.d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:19.032 [INFO][4042] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" HandleID="k8s-pod-network.d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:19.037 [INFO][4042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:19.048502 containerd[1473]: 2024-12-13 09:12:19.042 [INFO][4011] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:19.049361 containerd[1473]: time="2024-12-13T09:12:19.048716704Z" level=info msg="TearDown network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\" successfully"
Dec 13 09:12:19.052470 containerd[1473]: time="2024-12-13T09:12:19.051131378Z" level=info msg="StopPodSandbox for \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\" returns successfully"
Dec 13 09:12:19.052667 containerd[1473]: time="2024-12-13T09:12:19.052511780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bbgp8,Uid:0399f05a-b42a-4620-afd9-27c69d03846d,Namespace:calico-system,Attempt:1,}"
Dec 13 09:12:19.055774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117552735.mount: Deactivated successfully.
Dec 13 09:12:19.064097 systemd[1]: run-netns-cni\x2dd03795fc\x2da3c5\x2dadfc\x2d2eb8\x2d83d8fe84fb50.mount: Deactivated successfully.
Dec 13 09:12:19.308842 systemd-networkd[1371]: calicfd3aeebda3: Gained IPv6LL
Dec 13 09:12:19.433203 systemd-networkd[1371]: calif3b52e62af2: Link UP
Dec 13 09:12:19.435018 systemd-networkd[1371]: calif3b52e62af2: Gained carrier
Dec 13 09:12:19.464601 kubelet[2524]: E1213 09:12:19.463794    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.183 [INFO][4058] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0 csi-node-driver- calico-system  0399f05a-b42a-4620-afd9-27c69d03846d 884 0 2024-12-13 09:11:48 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  ci-4081.2.1-7-516c4b3017  csi-node-driver-bbgp8 eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] calif3b52e62af2  [] []}} ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Namespace="calico-system" Pod="csi-node-driver-bbgp8" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.184 [INFO][4058] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Namespace="calico-system" Pod="csi-node-driver-bbgp8" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.282 [INFO][4069] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" HandleID="k8s-pod-network.860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.315 [INFO][4069] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" HandleID="k8s-pod-network.860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003afb00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-7-516c4b3017", "pod":"csi-node-driver-bbgp8", "timestamp":"2024-12-13 09:12:19.282357444 +0000 UTC"}, Hostname:"ci-4081.2.1-7-516c4b3017", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.315 [INFO][4069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.315 [INFO][4069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.316 [INFO][4069] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-7-516c4b3017'
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.323 [INFO][4069] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.344 [INFO][4069] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.363 [INFO][4069] ipam/ipam.go 489: Trying affinity for 192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.369 [INFO][4069] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.376 [INFO][4069] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.377 [INFO][4069] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.381 [INFO][4069] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.391 [INFO][4069] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.407 [INFO][4069] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.67/26] block=192.168.93.64/26 handle="k8s-pod-network.860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.407 [INFO][4069] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.67/26] handle="k8s-pod-network.860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.407 [INFO][4069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:19.493249 containerd[1473]: 2024-12-13 09:12:19.407 [INFO][4069] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.67/26] IPv6=[] ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" HandleID="k8s-pod-network.860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:19.496346 containerd[1473]: 2024-12-13 09:12:19.420 [INFO][4058] cni-plugin/k8s.go 386: Populated endpoint ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Namespace="calico-system" Pod="csi-node-driver-bbgp8" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0399f05a-b42a-4620-afd9-27c69d03846d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"", Pod:"csi-node-driver-bbgp8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3b52e62af2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:19.496346 containerd[1473]: 2024-12-13 09:12:19.420 [INFO][4058] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.67/32] ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Namespace="calico-system" Pod="csi-node-driver-bbgp8" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:19.496346 containerd[1473]: 2024-12-13 09:12:19.420 [INFO][4058] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3b52e62af2 ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Namespace="calico-system" Pod="csi-node-driver-bbgp8" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:19.496346 containerd[1473]: 2024-12-13 09:12:19.434 [INFO][4058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Namespace="calico-system" Pod="csi-node-driver-bbgp8" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:19.496346 containerd[1473]: 2024-12-13 09:12:19.436 [INFO][4058] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Namespace="calico-system" Pod="csi-node-driver-bbgp8" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0399f05a-b42a-4620-afd9-27c69d03846d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57", Pod:"csi-node-driver-bbgp8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3b52e62af2", MAC:"9a:36:f5:68:ac:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:19.496346 containerd[1473]: 2024-12-13 09:12:19.488 [INFO][4058] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57" Namespace="calico-system" Pod="csi-node-driver-bbgp8" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
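[Annotation] Every allocation and release in these bursts is bracketed by "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock", serializing block updates among concurrent CNI invocations on the node. A minimal sketch of that pattern using flock(2); the lock path and the flock mechanism here are assumptions for illustration, not Calico's actual implementation:

    package main

    import (
    	"os"

    	"golang.org/x/sys/unix"
    )

    // withHostLock serializes a critical section across processes via
    // an exclusive flock on a lock file.
    func withHostLock(path string, fn func() error) error {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
    		return err
    	}
    	defer unix.Flock(int(f.Fd()), unix.LOCK_UN)
    	return fn()
    }

    func main() {
    	// Hypothetical lock path; only the pattern matters.
    	_ = withHostLock("/tmp/ipam.lock", func() error {
    		// claim or release an address here
    		return nil
    	})
    }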
Dec 13 09:12:19.552239 systemd[1]: Started sshd@11-165.232.145.99:22-147.75.109.163:57124.service - OpenSSH per-connection server daemon (147.75.109.163:57124).
Dec 13 09:12:19.595292 kubelet[2524]: I1213 09:12:19.594616    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hpjnn" podStartSLOduration=41.594474015 podStartE2EDuration="41.594474015s" podCreationTimestamp="2024-12-13 09:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:12:19.540458665 +0000 UTC m=+46.991612294" watchObservedRunningTime="2024-12-13 09:12:19.594474015 +0000 UTC m=+47.045627647"
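[Annotation] kubelet's startup-latency line reports podStartSLOduration plus timestamps annotated with "m=+46.99...", Go's notation for a wall-clock reading that also carries a monotonic offset (here, seconds since the kubelet process started). The zero-value times (0001-01-01 00:00:00) for firstStartedPulling/lastFinishedPulling indicate no image pull was recorded for this pod. A small demonstration of the same notation using only the standard library:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	start := time.Now()
    	time.Sleep(10 * time.Millisecond)
    	// A Time from time.Now carries a monotonic reading and prints
    	// with the "m=+..." suffix seen in the kubelet log line.
    	fmt.Println(time.Now())        // e.g. 2024-12-13 ... m=+0.010123456
    	fmt.Println(time.Since(start)) // elapsed, computed monotonically
    }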
Dec 13 09:12:19.722624 containerd[1473]: time="2024-12-13T09:12:19.722563156Z" level=info msg="StopPodSandbox for \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\""
Dec 13 09:12:19.760104 containerd[1473]: time="2024-12-13T09:12:19.726728308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:19.760104 containerd[1473]: time="2024-12-13T09:12:19.731319533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:19.760104 containerd[1473]: time="2024-12-13T09:12:19.731383225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:19.760104 containerd[1473]: time="2024-12-13T09:12:19.731801798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:19.765502 sshd[4096]: Accepted publickey for core from 147.75.109.163 port 57124 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:19.772355 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:19.792141 systemd-logind[1450]: New session 9 of user core.
Dec 13 09:12:19.797385 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 09:12:19.814425 systemd-networkd[1371]: cali3a197a967d1: Gained IPv6LL
Dec 13 09:12:19.861109 systemd[1]: Started cri-containerd-860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57.scope - libcontainer container 860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57.
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.035 [INFO][4144] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.037 [INFO][4144] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" iface="eth0" netns="/var/run/netns/cni-3e491df4-f7f8-a45e-7f67-9370fbb881c1"
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.039 [INFO][4144] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" iface="eth0" netns="/var/run/netns/cni-3e491df4-f7f8-a45e-7f67-9370fbb881c1"
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.039 [INFO][4144] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" iface="eth0" netns="/var/run/netns/cni-3e491df4-f7f8-a45e-7f67-9370fbb881c1"
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.039 [INFO][4144] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.039 [INFO][4144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.221 [INFO][4169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" HandleID="k8s-pod-network.6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.224 [INFO][4169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.224 [INFO][4169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.248 [WARNING][4169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" HandleID="k8s-pod-network.6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.249 [INFO][4169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" HandleID="k8s-pod-network.6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.253 [INFO][4169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:20.274665 containerd[1473]: 2024-12-13 09:12:20.261 [INFO][4144] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:20.285219 containerd[1473]: time="2024-12-13T09:12:20.282572521Z" level=info msg="TearDown network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\" successfully"
Dec 13 09:12:20.285219 containerd[1473]: time="2024-12-13T09:12:20.282632938Z" level=info msg="StopPodSandbox for \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\" returns successfully"
Dec 13 09:12:20.285738 kubelet[2524]: E1213 09:12:20.283253    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:20.286410 systemd[1]: run-netns-cni\x2d3e491df4\x2df7f8\x2da45e\x2d7f67\x2d9370fbb881c1.mount: Deactivated successfully.
Dec 13 09:12:20.291877 containerd[1473]: time="2024-12-13T09:12:20.288556075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sjqdz,Uid:5ca8008c-9c95-43aa-8201-4f1e59b8ea10,Namespace:kube-system,Attempt:1,}"
Dec 13 09:12:20.476089 kubelet[2524]: E1213 09:12:20.472185    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:20.476339 containerd[1473]: time="2024-12-13T09:12:20.474250277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bbgp8,Uid:0399f05a-b42a-4620-afd9-27c69d03846d,Namespace:calico-system,Attempt:1,} returns sandbox id \"860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57\""
Dec 13 09:12:20.636668 sshd[4096]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:20.697324 systemd[1]: sshd@11-165.232.145.99:22-147.75.109.163:57124.service: Deactivated successfully.
Dec 13 09:12:20.702437 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 09:12:20.710384 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit.
Dec 13 09:12:20.722385 systemd-networkd[1371]: vxlan.calico: Link UP
Dec 13 09:12:20.722398 systemd-networkd[1371]: vxlan.calico: Gained carrier
Dec 13 09:12:20.724878 systemd-logind[1450]: Removed session 9.
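[Annotation] With vxlan.calico up, Calico's VXLAN dataplane can carry pod traffic between nodes. A sketch for inspecting that device from Go, assuming the github.com/vishvananda/netlink library and sufficient privileges; Calico's documented defaults are VNI 4096 on UDP port 4789, but the code reads the live values rather than assuming them:

    package main

    import (
    	"fmt"

    	"github.com/vishvananda/netlink"
    )

    func main() {
    	link, err := netlink.LinkByName("vxlan.calico")
    	if err != nil {
    		panic(err)
    	}
    	// Report the VXLAN parameters actually configured on the host.
    	if vx, ok := link.(*netlink.Vxlan); ok {
    		fmt.Println("VNI:", vx.VxlanId, "UDP port:", vx.Port)
    	}
    }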
Dec 13 09:12:20.730618 containerd[1473]: time="2024-12-13T09:12:20.729905435Z" level=info msg="StopPodSandbox for \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\""
Dec 13 09:12:20.771305 containerd[1473]: time="2024-12-13T09:12:20.771253846Z" level=info msg="StopPodSandbox for \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\""
Dec 13 09:12:20.779778 systemd-networkd[1371]: calif3b52e62af2: Gained IPv6LL
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.035 [INFO][4251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.038 [INFO][4251] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" iface="eth0" netns="/var/run/netns/cni-d34d9c8b-6824-d730-5bc6-18e9e4249316"
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.039 [INFO][4251] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" iface="eth0" netns="/var/run/netns/cni-d34d9c8b-6824-d730-5bc6-18e9e4249316"
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.040 [INFO][4251] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" iface="eth0" netns="/var/run/netns/cni-d34d9c8b-6824-d730-5bc6-18e9e4249316"
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.040 [INFO][4251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.040 [INFO][4251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.130 [INFO][4267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" HandleID="k8s-pod-network.6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.135 [INFO][4267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.138 [INFO][4267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.184 [WARNING][4267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" HandleID="k8s-pod-network.6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.184 [INFO][4267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" HandleID="k8s-pod-network.6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.195 [INFO][4267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:21.224545 containerd[1473]: 2024-12-13 09:12:21.210 [INFO][4251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:21.230719 containerd[1473]: time="2024-12-13T09:12:21.230489458Z" level=info msg="TearDown network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\" successfully"
Dec 13 09:12:21.230719 containerd[1473]: time="2024-12-13T09:12:21.230555076Z" level=info msg="StopPodSandbox for \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\" returns successfully"
Dec 13 09:12:21.233279 containerd[1473]: time="2024-12-13T09:12:21.233093646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55499f54f6-hpbbq,Uid:88388b59-0e34-45d0-b4c1-5da85c561522,Namespace:calico-system,Attempt:1,}"
Dec 13 09:12:21.235047 systemd[1]: run-netns-cni\x2dd34d9c8b\x2d6824\x2dd730\x2d5bc6\x2d18e9e4249316.mount: Deactivated successfully.
Dec 13 09:12:21.411576 systemd-networkd[1371]: cali22cd407bfdd: Link UP
Dec 13 09:12:21.417174 systemd-networkd[1371]: cali22cd407bfdd: Gained carrier
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.189 [INFO][4249] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.189 [INFO][4249] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" iface="eth0" netns="/var/run/netns/cni-08c3aab5-f516-c9f0-edb7-2f2d5021238f"
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.189 [INFO][4249] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" iface="eth0" netns="/var/run/netns/cni-08c3aab5-f516-c9f0-edb7-2f2d5021238f"
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.191 [INFO][4249] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" iface="eth0" netns="/var/run/netns/cni-08c3aab5-f516-c9f0-edb7-2f2d5021238f"
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.191 [INFO][4249] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.192 [INFO][4249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.358 [INFO][4279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" HandleID="k8s-pod-network.90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.359 [INFO][4279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.378 [INFO][4279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.399 [WARNING][4279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" HandleID="k8s-pod-network.90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.401 [INFO][4279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" HandleID="k8s-pod-network.90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.410 [INFO][4279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:21.457437 containerd[1473]: 2024-12-13 09:12:21.435 [INFO][4249] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:21.460404 containerd[1473]: time="2024-12-13T09:12:21.460243505Z" level=info msg="TearDown network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\" successfully"
Dec 13 09:12:21.460404 containerd[1473]: time="2024-12-13T09:12:21.460294706Z" level=info msg="StopPodSandbox for \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\" returns successfully"
Dec 13 09:12:21.465369 containerd[1473]: time="2024-12-13T09:12:21.463170434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c78887f5b-pnmjx,Uid:94ad6c01-b7b0-4277-bf3f-b065b6556e24,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 09:12:21.465409 systemd[1]: run-netns-cni\x2d08c3aab5\x2df516\x2dc9f0\x2dedb7\x2d2f2d5021238f.mount: Deactivated successfully.
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:20.686 [INFO][4183] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0 coredns-6f6b679f8f- kube-system  5ca8008c-9c95-43aa-8201-4f1e59b8ea10 906 0 2024-12-13 09:11:38 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-4081.2.1-7-516c4b3017  coredns-6f6b679f8f-sjqdz eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali22cd407bfdd  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Namespace="kube-system" Pod="coredns-6f6b679f8f-sjqdz" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:20.686 [INFO][4183] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Namespace="kube-system" Pod="coredns-6f6b679f8f-sjqdz" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.103 [INFO][4246] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" HandleID="k8s-pod-network.4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.141 [INFO][4246] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" HandleID="k8s-pod-network.4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040cb30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-7-516c4b3017", "pod":"coredns-6f6b679f8f-sjqdz", "timestamp":"2024-12-13 09:12:21.103705058 +0000 UTC"}, Hostname:"ci-4081.2.1-7-516c4b3017", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.141 [INFO][4246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.195 [INFO][4246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.197 [INFO][4246] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-7-516c4b3017'
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.204 [INFO][4246] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.240 [INFO][4246] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.290 [INFO][4246] ipam/ipam.go 489: Trying affinity for 192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.305 [INFO][4246] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.314 [INFO][4246] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.315 [INFO][4246] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.324 [INFO][4246] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.351 [INFO][4246] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.377 [INFO][4246] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.68/26] block=192.168.93.64/26 handle="k8s-pod-network.4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.378 [INFO][4246] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.68/26] handle="k8s-pod-network.4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.378 [INFO][4246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:21.512657 containerd[1473]: 2024-12-13 09:12:21.378 [INFO][4246] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.68/26] IPv6=[] ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" HandleID="k8s-pod-network.4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:21.518747 containerd[1473]: 2024-12-13 09:12:21.392 [INFO][4183] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Namespace="kube-system" Pod="coredns-6f6b679f8f-sjqdz" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ca8008c-9c95-43aa-8201-4f1e59b8ea10", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"", Pod:"coredns-6f6b679f8f-sjqdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22cd407bfdd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:21.518747 containerd[1473]: 2024-12-13 09:12:21.392 [INFO][4183] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.68/32] ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Namespace="kube-system" Pod="coredns-6f6b679f8f-sjqdz" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:21.518747 containerd[1473]: 2024-12-13 09:12:21.392 [INFO][4183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22cd407bfdd ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Namespace="kube-system" Pod="coredns-6f6b679f8f-sjqdz" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:21.518747 containerd[1473]: 2024-12-13 09:12:21.419 [INFO][4183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Namespace="kube-system" Pod="coredns-6f6b679f8f-sjqdz" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:21.518747 containerd[1473]: 2024-12-13 09:12:21.429 [INFO][4183] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Namespace="kube-system" Pod="coredns-6f6b679f8f-sjqdz" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ca8008c-9c95-43aa-8201-4f1e59b8ea10", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e", Pod:"coredns-6f6b679f8f-sjqdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22cd407bfdd", MAC:"4e:1c:29:85:5b:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:21.518747 containerd[1473]: 2024-12-13 09:12:21.488 [INFO][4183] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e" Namespace="kube-system" Pod="coredns-6f6b679f8f-sjqdz" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
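The [4246] assignment walks Calico's IPAM ladder: look up the node's block affinities, load the affine block 192.168.93.64/26, claim the first free address (192.168.93.68), then write the block back to the datastore so the claim is durable. A toy Go version of the claim step; the block-as-bitmap layout is illustrative, and real Calico blocks also track handles, attributes, and a revision for optimistic concurrency:

package main

import (
	"fmt"
	"net"
)

// block is a toy stand-in for a Calico IPAM block: a CIDR plus an
// allocation bitmap over its 64 addresses.
type block struct {
	cidr *net.IPNet
	used []bool
}

// assign claims the first free address, mirroring "Attempting to assign
// 1 addresses from block" followed by "Writing block in order to claim IPs".
func (b *block) assign() (net.IP, bool) {
	base := b.cidr.IP.To4()
	for i, inUse := range b.used {
		if !inUse {
			b.used[i] = true
			return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), true
		}
	}
	return nil, false // block full: IPAM would move on to another block
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.93.64/26")
	b := &block{cidr: cidr, used: make([]bool, 64)}
	for i := 0; i < 4; i++ {
		b.used[i] = true // mark the first four used so the sketch reproduces the .68 outcome
	}
	ip, _ := b.assign()
	fmt.Println(ip) // 192.168.93.68, matching the claimed address above
}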
Dec 13 09:12:21.541662 kubelet[2524]: E1213 09:12:21.540452    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:21.769796 containerd[1473]: time="2024-12-13T09:12:21.767305159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:21.773125 containerd[1473]: time="2024-12-13T09:12:21.770584283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:21.773125 containerd[1473]: time="2024-12-13T09:12:21.770632918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:21.773125 containerd[1473]: time="2024-12-13T09:12:21.770774736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:21.988295 systemd[1]: Started cri-containerd-4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e.scope - libcontainer container 4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e.
Dec 13 09:12:22.042168 systemd-networkd[1371]: cali168385a15a8: Link UP
Dec 13 09:12:22.062218 systemd-networkd[1371]: cali168385a15a8: Gained carrier
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.647 [INFO][4298] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0 calico-apiserver-7c78887f5b- calico-apiserver  94ad6c01-b7b0-4277-bf3f-b065b6556e24 920 0 2024-12-13 09:11:47 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c78887f5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-4081.2.1-7-516c4b3017  calico-apiserver-7c78887f5b-pnmjx eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali168385a15a8  [] []}} ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-pnmjx" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.648 [INFO][4298] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-pnmjx" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.809 [INFO][4321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" HandleID="k8s-pod-network.8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.848 [INFO][4321] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" HandleID="k8s-pod-network.8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000506b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-7-516c4b3017", "pod":"calico-apiserver-7c78887f5b-pnmjx", "timestamp":"2024-12-13 09:12:21.809595701 +0000 UTC"}, Hostname:"ci-4081.2.1-7-516c4b3017", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.848 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.848 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.848 [INFO][4321] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-7-516c4b3017'
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.854 [INFO][4321] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.882 [INFO][4321] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.908 [INFO][4321] ipam/ipam.go 489: Trying affinity for 192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.915 [INFO][4321] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.920 [INFO][4321] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.920 [INFO][4321] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.924 [INFO][4321] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.940 [INFO][4321] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.961 [INFO][4321] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.69/26] block=192.168.93.64/26 handle="k8s-pod-network.8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.961 [INFO][4321] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.69/26] handle="k8s-pod-network.8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.961 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:22.145316 containerd[1473]: 2024-12-13 09:12:21.961 [INFO][4321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.69/26] IPv6=[] ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" HandleID="k8s-pod-network.8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:22.147161 containerd[1473]: 2024-12-13 09:12:21.997 [INFO][4298] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-pnmjx" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0", GenerateName:"calico-apiserver-7c78887f5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"94ad6c01-b7b0-4277-bf3f-b065b6556e24", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c78887f5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"", Pod:"calico-apiserver-7c78887f5b-pnmjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali168385a15a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:22.147161 containerd[1473]: 2024-12-13 09:12:21.997 [INFO][4298] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.69/32] ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-pnmjx" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:22.147161 containerd[1473]: 2024-12-13 09:12:21.997 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali168385a15a8 ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-pnmjx" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:22.147161 containerd[1473]: 2024-12-13 09:12:22.071 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-pnmjx" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:22.147161 containerd[1473]: 2024-12-13 09:12:22.081 [INFO][4298] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-pnmjx" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0", GenerateName:"calico-apiserver-7c78887f5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"94ad6c01-b7b0-4277-bf3f-b065b6556e24", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c78887f5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809", Pod:"calico-apiserver-7c78887f5b-pnmjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali168385a15a8", MAC:"6e:dd:41:95:23:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:22.147161 containerd[1473]: 2024-12-13 09:12:22.129 [INFO][4298] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809" Namespace="calico-apiserver" Pod="calico-apiserver-7c78887f5b-pnmjx" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
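The host-side names cali22cd407bfdd and cali168385a15a8 fit the pattern of a fixed prefix plus a short hash, which keeps interface names stable, unique per endpoint, and inside Linux's 15-character interface-name limit. The log does not show what Calico hashes to get the suffix, so the input below (the container ID) is an assumption; the sketch only illustrates the prefix-plus-hash technique:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName derives a stable, length-limited host-side interface
// name: "cali" (4 chars) + 11 hex chars = 15 chars, the IFNAMSIZ limit.
// Hashing the container ID here is an assumption for illustration.
func hostVethName(containerID string) string {
	sum := sha1.Sum([]byte(containerID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(hostVethName("8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809"))
}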
Dec 13 09:12:22.188298 containerd[1473]: time="2024-12-13T09:12:22.186059470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sjqdz,Uid:5ca8008c-9c95-43aa-8201-4f1e59b8ea10,Namespace:kube-system,Attempt:1,} returns sandbox id \"4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e\""
Dec 13 09:12:22.193336 kubelet[2524]: E1213 09:12:22.192499    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:22.205426 containerd[1473]: time="2024-12-13T09:12:22.205362597Z" level=info msg="CreateContainer within sandbox \"4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 09:12:22.256353 systemd-networkd[1371]: cali454f48b5050: Link UP
Dec 13 09:12:22.264043 systemd-networkd[1371]: cali454f48b5050: Gained carrier
Dec 13 09:12:22.300537 containerd[1473]: time="2024-12-13T09:12:22.298063436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:22.300537 containerd[1473]: time="2024-12-13T09:12:22.298156005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:22.300537 containerd[1473]: time="2024-12-13T09:12:22.298202540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:22.300537 containerd[1473]: time="2024-12-13T09:12:22.298365101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:22.369519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745684925.mount: Deactivated successfully.
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:21.638 [INFO][4287] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0 calico-kube-controllers-55499f54f6- calico-system  88388b59-0e34-45d0-b4c1-5da85c561522 919 0 2024-12-13 09:11:48 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55499f54f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  ci-4081.2.1-7-516c4b3017  calico-kube-controllers-55499f54f6-hpbbq eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] cali454f48b5050  [] []}} ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Namespace="calico-system" Pod="calico-kube-controllers-55499f54f6-hpbbq" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:21.640 [INFO][4287] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Namespace="calico-system" Pod="calico-kube-controllers-55499f54f6-hpbbq" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:21.851 [INFO][4320] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" HandleID="k8s-pod-network.8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:21.884 [INFO][4320] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" HandleID="k8s-pod-network.8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036cd30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-7-516c4b3017", "pod":"calico-kube-controllers-55499f54f6-hpbbq", "timestamp":"2024-12-13 09:12:21.851734698 +0000 UTC"}, Hostname:"ci-4081.2.1-7-516c4b3017", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:21.884 [INFO][4320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:21.962 [INFO][4320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:21.967 [INFO][4320] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-7-516c4b3017'
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:21.993 [INFO][4320] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.022 [INFO][4320] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.075 [INFO][4320] ipam/ipam.go 489: Trying affinity for 192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.094 [INFO][4320] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.116 [INFO][4320] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.120 [INFO][4320] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.132 [INFO][4320] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.169 [INFO][4320] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.198 [INFO][4320] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.70/26] block=192.168.93.64/26 handle="k8s-pod-network.8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.205 [INFO][4320] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.70/26] handle="k8s-pod-network.8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" host="ci-4081.2.1-7-516c4b3017"
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.205 [INFO][4320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:22.388153 containerd[1473]: 2024-12-13 09:12:22.206 [INFO][4320] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.70/26] IPv6=[] ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" HandleID="k8s-pod-network.8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:22.390416 containerd[1473]: 2024-12-13 09:12:22.218 [INFO][4287] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Namespace="calico-system" Pod="calico-kube-controllers-55499f54f6-hpbbq" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0", GenerateName:"calico-kube-controllers-55499f54f6-", Namespace:"calico-system", SelfLink:"", UID:"88388b59-0e34-45d0-b4c1-5da85c561522", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55499f54f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"", Pod:"calico-kube-controllers-55499f54f6-hpbbq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali454f48b5050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:22.390416 containerd[1473]: 2024-12-13 09:12:22.218 [INFO][4287] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.70/32] ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Namespace="calico-system" Pod="calico-kube-controllers-55499f54f6-hpbbq" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:22.390416 containerd[1473]: 2024-12-13 09:12:22.218 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali454f48b5050 ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Namespace="calico-system" Pod="calico-kube-controllers-55499f54f6-hpbbq" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:22.390416 containerd[1473]: 2024-12-13 09:12:22.261 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Namespace="calico-system" Pod="calico-kube-controllers-55499f54f6-hpbbq" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:22.390416 containerd[1473]: 2024-12-13 09:12:22.268 [INFO][4287] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Namespace="calico-system" Pod="calico-kube-controllers-55499f54f6-hpbbq" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0", GenerateName:"calico-kube-controllers-55499f54f6-", Namespace:"calico-system", SelfLink:"", UID:"88388b59-0e34-45d0-b4c1-5da85c561522", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55499f54f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0", Pod:"calico-kube-controllers-55499f54f6-hpbbq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali454f48b5050", MAC:"be:13:6d:f0:be:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:22.390416 containerd[1473]: 2024-12-13 09:12:22.380 [INFO][4287] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0" Namespace="calico-system" Pod="calico-kube-controllers-55499f54f6-hpbbq" WorkloadEndpoint="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:22.391513 containerd[1473]: time="2024-12-13T09:12:22.391359372Z" level=info msg="CreateContainer within sandbox \"4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25320564a935a4bff79503f1b453db3f6107732052f367394e095b65717bd1e8\""
Dec 13 09:12:22.394245 containerd[1473]: time="2024-12-13T09:12:22.394169407Z" level=info msg="StartContainer for \"25320564a935a4bff79503f1b453db3f6107732052f367394e095b65717bd1e8\""
Dec 13 09:12:22.439588 systemd-networkd[1371]: cali22cd407bfdd: Gained IPv6LL
Dec 13 09:12:22.464441 systemd[1]: Started cri-containerd-8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809.scope - libcontainer container 8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809.
Dec 13 09:12:22.511370 systemd[1]: Started cri-containerd-25320564a935a4bff79503f1b453db3f6107732052f367394e095b65717bd1e8.scope - libcontainer container 25320564a935a4bff79503f1b453db3f6107732052f367394e095b65717bd1e8.
Dec 13 09:12:22.569384 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL
Dec 13 09:12:22.653182 containerd[1473]: time="2024-12-13T09:12:22.652589913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:22.653182 containerd[1473]: time="2024-12-13T09:12:22.652656050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:22.653182 containerd[1473]: time="2024-12-13T09:12:22.652672611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:22.653182 containerd[1473]: time="2024-12-13T09:12:22.652809459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:22.680334 containerd[1473]: time="2024-12-13T09:12:22.679281721Z" level=info msg="StartContainer for \"25320564a935a4bff79503f1b453db3f6107732052f367394e095b65717bd1e8\" returns successfully"
Dec 13 09:12:22.735275 systemd[1]: Started cri-containerd-8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0.scope - libcontainer container 8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0.
Dec 13 09:12:23.020454 containerd[1473]: time="2024-12-13T09:12:23.020136818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c78887f5b-pnmjx,Uid:94ad6c01-b7b0-4277-bf3f-b065b6556e24,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809\""
Dec 13 09:12:23.057656 containerd[1473]: time="2024-12-13T09:12:23.055727766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55499f54f6-hpbbq,Uid:88388b59-0e34-45d0-b4c1-5da85c561522,Namespace:calico-system,Attempt:1,} returns sandbox id \"8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0\""
Dec 13 09:12:23.399391 systemd-networkd[1371]: cali454f48b5050: Gained IPv6LL
Dec 13 09:12:23.608725 kubelet[2524]: E1213 09:12:23.606803    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:23.638981 kubelet[2524]: I1213 09:12:23.638054    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sjqdz" podStartSLOduration=45.638013988 podStartE2EDuration="45.638013988s" podCreationTimestamp="2024-12-13 09:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:12:23.636319674 +0000 UTC m=+51.087473308" watchObservedRunningTime="2024-12-13 09:12:23.638013988 +0000 UTC m=+51.089167616"
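The coredns startup numbers are internally consistent: the pod was created at 09:11:38 and observed running at 09:12:23.638013988, i.e. 45.638013988 s, and since firstStartedPulling/lastFinishedPulling are the zero time (presumably no image pull was needed) the E2E and SLO durations coincide. A quick Go check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2024-12-13T09:11:38Z")
	running, _ := time.Parse(time.RFC3339Nano, "2024-12-13T09:12:23.638013988Z")
	fmt.Println(running.Sub(created)) // 45.638013988s, the logged podStartSLOduration
}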
Dec 13 09:12:23.974845 systemd-networkd[1371]: cali168385a15a8: Gained IPv6LL
Dec 13 09:12:24.276066 containerd[1473]: time="2024-12-13T09:12:24.274226591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:24.277259 containerd[1473]: time="2024-12-13T09:12:24.277161859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Dec 13 09:12:24.278075 containerd[1473]: time="2024-12-13T09:12:24.278008499Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:24.283160 containerd[1473]: time="2024-12-13T09:12:24.283086721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:24.284811 containerd[1473]: time="2024-12-13T09:12:24.284668903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 6.309960866s"
Dec 13 09:12:24.284811 containerd[1473]: time="2024-12-13T09:12:24.284739407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 09:12:24.287323 containerd[1473]: time="2024-12-13T09:12:24.287078726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 09:12:24.288870 containerd[1473]: time="2024-12-13T09:12:24.288772466Z" level=info msg="CreateContainer within sandbox \"524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 09:12:24.313632 containerd[1473]: time="2024-12-13T09:12:24.312823365Z" level=info msg="CreateContainer within sandbox \"524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fb2f21ce8891afa8ab0c33d8fb46997399eb650762db46c3d4c3eed4c65d8cf4\""
Dec 13 09:12:24.315199 containerd[1473]: time="2024-12-13T09:12:24.315138792Z" level=info msg="StartContainer for \"fb2f21ce8891afa8ab0c33d8fb46997399eb650762db46c3d4c3eed4c65d8cf4\""
Dec 13 09:12:24.399003 systemd[1]: Started cri-containerd-fb2f21ce8891afa8ab0c33d8fb46997399eb650762db46c3d4c3eed4c65d8cf4.scope - libcontainer container fb2f21ce8891afa8ab0c33d8fb46997399eb650762db46c3d4c3eed4c65d8cf4.
Dec 13 09:12:24.493157 containerd[1473]: time="2024-12-13T09:12:24.493057410Z" level=info msg="StartContainer for \"fb2f21ce8891afa8ab0c33d8fb46997399eb650762db46c3d4c3eed4c65d8cf4\" returns successfully"
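The CreateContainer/StartContainer pair above is kubelet driving containerd over the CRI gRPC API. For comparison, the same pull, create, and start lifecycle through containerd's standalone Go client looks roughly like the sketch below; this is not kubelet's code path, the container and snapshot IDs are made up, and cleanup is omitted for brevity:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull resolves the tag to a digest and fetches layers, like the
	// "PullImage ... returns image reference sha256:..." lines above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create the container, then start its task: the client-level
	// analogue of the CreateContainer/StartContainer events in the log.
	container, err := client.NewContainer(ctx, "calico-apiserver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-apiserver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}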
Dec 13 09:12:24.631724 kubelet[2524]: E1213 09:12:24.631533    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:25.634565 kubelet[2524]: E1213 09:12:25.634389    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:25.664019 systemd[1]: Started sshd@12-165.232.145.99:22-147.75.109.163:57136.service - OpenSSH per-connection server daemon (147.75.109.163:57136).
Dec 13 09:12:25.826905 sshd[4610]: Accepted publickey for core from 147.75.109.163 port 57136 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:25.828607 sshd[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:25.848989 systemd-logind[1450]: New session 10 of user core.
Dec 13 09:12:25.860444 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 09:12:26.402841 kubelet[2524]: I1213 09:12:26.401102    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c78887f5b-2s422" podStartSLOduration=32.088754508 podStartE2EDuration="38.401077841s" podCreationTimestamp="2024-12-13 09:11:48 +0000 UTC" firstStartedPulling="2024-12-13 09:12:17.973548943 +0000 UTC m=+45.424702564" lastFinishedPulling="2024-12-13 09:12:24.285872273 +0000 UTC m=+51.737025897" observedRunningTime="2024-12-13 09:12:24.657779329 +0000 UTC m=+52.108932966" watchObservedRunningTime="2024-12-13 09:12:26.401077841 +0000 UTC m=+53.852231467"
Dec 13 09:12:26.859231 sshd[4610]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:26.881776 systemd[1]: sshd@12-165.232.145.99:22-147.75.109.163:57136.service: Deactivated successfully.
Dec 13 09:12:26.887886 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 09:12:26.893099 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit.
Dec 13 09:12:26.908697 systemd[1]: Started sshd@13-165.232.145.99:22-147.75.109.163:42098.service - OpenSSH per-connection server daemon (147.75.109.163:42098).
Dec 13 09:12:26.920459 systemd-logind[1450]: Removed session 10.
Dec 13 09:12:26.972553 containerd[1473]: time="2024-12-13T09:12:26.970532993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:26.975761 containerd[1473]: time="2024-12-13T09:12:26.975556395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 09:12:26.978748 containerd[1473]: time="2024-12-13T09:12:26.978526892Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:26.987059 containerd[1473]: time="2024-12-13T09:12:26.986842408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:26.993984 containerd[1473]: time="2024-12-13T09:12:26.993660849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.706523339s"
Dec 13 09:12:26.994603 containerd[1473]: time="2024-12-13T09:12:26.993859459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 09:12:27.000985 containerd[1473]: time="2024-12-13T09:12:27.000462082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 09:12:27.008995 containerd[1473]: time="2024-12-13T09:12:27.008926665Z" level=info msg="CreateContainer within sandbox \"860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 09:12:27.065481 sshd[4631]: Accepted publickey for core from 147.75.109.163 port 42098 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:27.071766 sshd[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:27.094981 systemd-logind[1450]: New session 11 of user core.
Dec 13 09:12:27.097818 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 09:12:27.139850 containerd[1473]: time="2024-12-13T09:12:27.139682563Z" level=info msg="CreateContainer within sandbox \"860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"669fbd2096b43b47d2155a2929d700d8a97fc7a1f3fb44d284cf1a468346f933\""
Dec 13 09:12:27.145463 containerd[1473]: time="2024-12-13T09:12:27.143807915Z" level=info msg="StartContainer for \"669fbd2096b43b47d2155a2929d700d8a97fc7a1f3fb44d284cf1a468346f933\""
Dec 13 09:12:27.278351 systemd[1]: Started cri-containerd-669fbd2096b43b47d2155a2929d700d8a97fc7a1f3fb44d284cf1a468346f933.scope - libcontainer container 669fbd2096b43b47d2155a2929d700d8a97fc7a1f3fb44d284cf1a468346f933.
Dec 13 09:12:27.425776 containerd[1473]: time="2024-12-13T09:12:27.425586712Z" level=info msg="StartContainer for \"669fbd2096b43b47d2155a2929d700d8a97fc7a1f3fb44d284cf1a468346f933\" returns successfully"
Dec 13 09:12:27.517653 containerd[1473]: time="2024-12-13T09:12:27.517575664Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:27.533222 containerd[1473]: time="2024-12-13T09:12:27.533080774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Dec 13 09:12:27.540358 containerd[1473]: time="2024-12-13T09:12:27.540273214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 539.742093ms"
Dec 13 09:12:27.540358 containerd[1473]: time="2024-12-13T09:12:27.540348570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 09:12:27.554300 containerd[1473]: time="2024-12-13T09:12:27.550419825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Dec 13 09:12:27.566835 containerd[1473]: time="2024-12-13T09:12:27.563675280Z" level=info msg="CreateContainer within sandbox \"8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 09:12:27.589587 containerd[1473]: time="2024-12-13T09:12:27.589506388Z" level=info msg="CreateContainer within sandbox \"8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"98566add49b526828439bc03b2366dfcf171f483eb3f57b3e255b86785fc70cf\""
Dec 13 09:12:27.590584 containerd[1473]: time="2024-12-13T09:12:27.590377791Z" level=info msg="StartContainer for \"98566add49b526828439bc03b2366dfcf171f483eb3f57b3e255b86785fc70cf\""
Dec 13 09:12:27.767608 systemd[1]: Started cri-containerd-98566add49b526828439bc03b2366dfcf171f483eb3f57b3e255b86785fc70cf.scope - libcontainer container 98566add49b526828439bc03b2366dfcf171f483eb3f57b3e255b86785fc70cf.
Dec 13 09:12:27.798719 sshd[4631]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:27.818512 systemd[1]: sshd@13-165.232.145.99:22-147.75.109.163:42098.service: Deactivated successfully.
Dec 13 09:12:27.827874 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 09:12:27.836644 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit.
Dec 13 09:12:27.850140 systemd[1]: Started sshd@14-165.232.145.99:22-147.75.109.163:42112.service - OpenSSH per-connection server daemon (147.75.109.163:42112).
Dec 13 09:12:27.874341 systemd-logind[1450]: Removed session 11.
Dec 13 09:12:27.992441 sshd[4701]: Accepted publickey for core from 147.75.109.163 port 42112 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:28.002741 sshd[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:28.044470 systemd-logind[1450]: New session 12 of user core.
Dec 13 09:12:28.051559 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 09:12:28.081591 containerd[1473]: time="2024-12-13T09:12:28.081512070Z" level=info msg="StartContainer for \"98566add49b526828439bc03b2366dfcf171f483eb3f57b3e255b86785fc70cf\" returns successfully"
Dec 13 09:12:28.470931 sshd[4701]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:28.478461 systemd[1]: sshd@14-165.232.145.99:22-147.75.109.163:42112.service: Deactivated successfully.
Dec 13 09:12:28.483574 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 09:12:28.486864 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit.
Dec 13 09:12:28.490249 systemd-logind[1450]: Removed session 12.
Dec 13 09:12:29.766050 kubelet[2524]: I1213 09:12:29.764848    2524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 09:12:30.494504 systemd[1]: Started sshd@15-165.232.145.99:22-218.92.0.166:63287.service - OpenSSH per-connection server daemon (218.92.0.166:63287).
Dec 13 09:12:31.344721 containerd[1473]: time="2024-12-13T09:12:31.343405006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:31.346038 containerd[1473]: time="2024-12-13T09:12:31.345943670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Dec 13 09:12:31.347478 containerd[1473]: time="2024-12-13T09:12:31.347415542Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:31.351104 containerd[1473]: time="2024-12-13T09:12:31.351015814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:31.352290 containerd[1473]: time="2024-12-13T09:12:31.352233099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.801741746s"
Dec 13 09:12:31.352563 containerd[1473]: time="2024-12-13T09:12:31.352445064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Dec 13 09:12:31.355138 containerd[1473]: time="2024-12-13T09:12:31.354817996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 09:12:31.387012 containerd[1473]: time="2024-12-13T09:12:31.386924741Z" level=info msg="CreateContainer within sandbox \"8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 09:12:31.417857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560261544.mount: Deactivated successfully.
Dec 13 09:12:31.434556 containerd[1473]: time="2024-12-13T09:12:31.434475920Z" level=info msg="CreateContainer within sandbox \"8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"254ac80aeeef4abe564e405f24ca2a3d0d6318f25fb1d558fd2155c523e2742a\""
Dec 13 09:12:31.437997 containerd[1473]: time="2024-12-13T09:12:31.437920148Z" level=info msg="StartContainer for \"254ac80aeeef4abe564e405f24ca2a3d0d6318f25fb1d558fd2155c523e2742a\""
Dec 13 09:12:31.507636 systemd[1]: Started cri-containerd-254ac80aeeef4abe564e405f24ca2a3d0d6318f25fb1d558fd2155c523e2742a.scope - libcontainer container 254ac80aeeef4abe564e405f24ca2a3d0d6318f25fb1d558fd2155c523e2742a.
Dec 13 09:12:31.607892 containerd[1473]: time="2024-12-13T09:12:31.607655256Z" level=info msg="StartContainer for \"254ac80aeeef4abe564e405f24ca2a3d0d6318f25fb1d558fd2155c523e2742a\" returns successfully"
Dec 13 09:12:31.825897 kubelet[2524]: I1213 09:12:31.825577    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55499f54f6-hpbbq" podStartSLOduration=35.534235213 podStartE2EDuration="43.825545076s" podCreationTimestamp="2024-12-13 09:11:48 +0000 UTC" firstStartedPulling="2024-12-13 09:12:23.062892853 +0000 UTC m=+50.514046458" lastFinishedPulling="2024-12-13 09:12:31.354202716 +0000 UTC m=+58.805356321" observedRunningTime="2024-12-13 09:12:31.821217132 +0000 UTC m=+59.272370832" watchObservedRunningTime="2024-12-13 09:12:31.825545076 +0000 UTC m=+59.276698718"
Dec 13 09:12:31.825897 kubelet[2524]: I1213 09:12:31.825719    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c78887f5b-pnmjx" podStartSLOduration=40.309091846 podStartE2EDuration="44.825709865s" podCreationTimestamp="2024-12-13 09:11:47 +0000 UTC" firstStartedPulling="2024-12-13 09:12:23.032895616 +0000 UTC m=+50.484049240" lastFinishedPulling="2024-12-13 09:12:27.549513631 +0000 UTC m=+55.000667259" observedRunningTime="2024-12-13 09:12:28.786100963 +0000 UTC m=+56.237254601" watchObservedRunningTime="2024-12-13 09:12:31.825709865 +0000 UTC m=+59.276863493"
Dec 13 09:12:31.867199 sshd[4781]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166  user=root
Dec 13 09:12:32.792470 containerd[1473]: time="2024-12-13T09:12:32.792413940Z" level=info msg="StopPodSandbox for \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\""
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.013 [WARNING][4821] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"df5dd674-9dba-48dd-acc0-496e39d2ef18", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1", Pod:"coredns-6f6b679f8f-hpjnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a197a967d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.017 [INFO][4821] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.017 [INFO][4821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" iface="eth0" netns=""
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.017 [INFO][4821] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.017 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.159 [INFO][4828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" HandleID="k8s-pod-network.ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.163 [INFO][4828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.164 [INFO][4828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.191 [WARNING][4828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" HandleID="k8s-pod-network.ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.192 [INFO][4828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" HandleID="k8s-pod-network.ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.202 [INFO][4828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:33.212064 containerd[1473]: 2024-12-13 09:12:33.205 [INFO][4821] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:33.212064 containerd[1473]: time="2024-12-13T09:12:33.210547172Z" level=info msg="TearDown network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\" successfully"
Dec 13 09:12:33.212064 containerd[1473]: time="2024-12-13T09:12:33.210609139Z" level=info msg="StopPodSandbox for \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\" returns successfully"
Dec 13 09:12:33.216202 containerd[1473]: time="2024-12-13T09:12:33.215612428Z" level=info msg="RemovePodSandbox for \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\""
Dec 13 09:12:33.221803 containerd[1473]: time="2024-12-13T09:12:33.221604917Z" level=info msg="Forcibly stopping sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\""
Dec 13 09:12:33.309806 containerd[1473]: time="2024-12-13T09:12:33.309158509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:33.315980 containerd[1473]: time="2024-12-13T09:12:33.315171106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 09:12:33.326744 containerd[1473]: time="2024-12-13T09:12:33.326250945Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:33.351262 containerd[1473]: time="2024-12-13T09:12:33.351163341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:33.353807 containerd[1473]: time="2024-12-13T09:12:33.353736621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.99885838s"
Dec 13 09:12:33.357281 containerd[1473]: time="2024-12-13T09:12:33.356759358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 09:12:33.378905 containerd[1473]: time="2024-12-13T09:12:33.378736621Z" level=info msg="CreateContainer within sandbox \"860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 09:12:33.462276 containerd[1473]: time="2024-12-13T09:12:33.461314347Z" level=info msg="CreateContainer within sandbox \"860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"06b2b9bf631f89cf703b4bd1fa9469236b7f869681ba7be917366804dce60814\""
Dec 13 09:12:33.467425 containerd[1473]: time="2024-12-13T09:12:33.467124192Z" level=info msg="StartContainer for \"06b2b9bf631f89cf703b4bd1fa9469236b7f869681ba7be917366804dce60814\""
Dec 13 09:12:33.505585 systemd[1]: Started sshd@16-165.232.145.99:22-147.75.109.163:42124.service - OpenSSH per-connection server daemon (147.75.109.163:42124).
Dec 13 09:12:33.652537 systemd[1]: run-containerd-runc-k8s.io-254ac80aeeef4abe564e405f24ca2a3d0d6318f25fb1d558fd2155c523e2742a-runc.0FHWte.mount: Deactivated successfully.
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.385 [WARNING][4849] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"df5dd674-9dba-48dd-acc0-496e39d2ef18", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"574af4f9fbd2559d3995571e58a8b358d33f3154213bdf19157f347ea3302cd1", Pod:"coredns-6f6b679f8f-hpjnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a197a967d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.385 [INFO][4849] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.385 [INFO][4849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" iface="eth0" netns=""
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.385 [INFO][4849] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.386 [INFO][4849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.590 [INFO][4856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" HandleID="k8s-pod-network.ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.594 [INFO][4856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.594 [INFO][4856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.658 [WARNING][4856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" HandleID="k8s-pod-network.ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.658 [INFO][4856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" HandleID="k8s-pod-network.ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--hpjnn-eth0"
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.668 [INFO][4856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:33.681956 containerd[1473]: 2024-12-13 09:12:33.678 [INFO][4849] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0"
Dec 13 09:12:33.684561 containerd[1473]: time="2024-12-13T09:12:33.682828569Z" level=info msg="TearDown network for sandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\" successfully"
Dec 13 09:12:33.691343 sshd[4863]: Accepted publickey for core from 147.75.109.163 port 42124 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:33.701673 sshd[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:33.721618 systemd-logind[1450]: New session 13 of user core.
Dec 13 09:12:33.725371 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 09:12:33.748744 containerd[1473]: time="2024-12-13T09:12:33.748595231Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 09:12:33.768901 systemd[1]: Started cri-containerd-06b2b9bf631f89cf703b4bd1fa9469236b7f869681ba7be917366804dce60814.scope - libcontainer container 06b2b9bf631f89cf703b4bd1fa9469236b7f869681ba7be917366804dce60814.
Dec 13 09:12:33.816393 containerd[1473]: time="2024-12-13T09:12:33.815764653Z" level=info msg="RemovePodSandbox \"ba72dca9dc93b38d4868c9d8b93508546859aa84f4e9701d009673bd92a348d0\" returns successfully"
Dec 13 09:12:33.819525 containerd[1473]: time="2024-12-13T09:12:33.819319086Z" level=info msg="StopPodSandbox for \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\""
Dec 13 09:12:33.924422 containerd[1473]: time="2024-12-13T09:12:33.924298280Z" level=info msg="StartContainer for \"06b2b9bf631f89cf703b4bd1fa9469236b7f869681ba7be917366804dce60814\" returns successfully"
Dec 13 09:12:33.978699 sshd[4742]: PAM: Permission denied for root from 218.92.0.166
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:33.954 [WARNING][4927] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0399f05a-b42a-4620-afd9-27c69d03846d", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57", Pod:"csi-node-driver-bbgp8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3b52e62af2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:33.956 [INFO][4927] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:33.956 [INFO][4927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" iface="eth0" netns=""
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:33.957 [INFO][4927] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:33.957 [INFO][4927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:34.038 [INFO][4951] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" HandleID="k8s-pod-network.d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:34.039 [INFO][4951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:34.039 [INFO][4951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:34.053 [WARNING][4951] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" HandleID="k8s-pod-network.d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:34.053 [INFO][4951] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" HandleID="k8s-pod-network.d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:34.061 [INFO][4951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:34.067545 containerd[1473]: 2024-12-13 09:12:34.064 [INFO][4927] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:34.076113 containerd[1473]: time="2024-12-13T09:12:34.067628338Z" level=info msg="TearDown network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\" successfully"
Dec 13 09:12:34.076113 containerd[1473]: time="2024-12-13T09:12:34.067672765Z" level=info msg="StopPodSandbox for \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\" returns successfully"
Dec 13 09:12:34.076113 containerd[1473]: time="2024-12-13T09:12:34.070993078Z" level=info msg="RemovePodSandbox for \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\""
Dec 13 09:12:34.076113 containerd[1473]: time="2024-12-13T09:12:34.071062565Z" level=info msg="Forcibly stopping sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\""
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.139 [WARNING][4972] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0399f05a-b42a-4620-afd9-27c69d03846d", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"860b6989011839b920c3c6c0195a32f78bbe3102bee76e2d1a7ff39794147b57", Pod:"csi-node-driver-bbgp8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3b52e62af2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.139 [INFO][4972] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.139 [INFO][4972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" iface="eth0" netns=""
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.139 [INFO][4972] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.139 [INFO][4972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.192 [INFO][4979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" HandleID="k8s-pod-network.d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.195 [INFO][4979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.197 [INFO][4979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.210 [WARNING][4979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" HandleID="k8s-pod-network.d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.211 [INFO][4979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" HandleID="k8s-pod-network.d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5" Workload="ci--4081.2.1--7--516c4b3017-k8s-csi--node--driver--bbgp8-eth0"
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.216 [INFO][4979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:34.226147 containerd[1473]: 2024-12-13 09:12:34.220 [INFO][4972] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5"
Dec 13 09:12:34.226147 containerd[1473]: time="2024-12-13T09:12:34.224563289Z" level=info msg="TearDown network for sandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\" successfully"
Dec 13 09:12:34.230114 containerd[1473]: time="2024-12-13T09:12:34.229652594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 09:12:34.230114 containerd[1473]: time="2024-12-13T09:12:34.229770992Z" level=info msg="RemovePodSandbox \"d2617074b5fc9e2285bd3a0b37372b28c0c9a4e69b28e6c860d129188b8c2ea5\" returns successfully"
Dec 13 09:12:34.233426 containerd[1473]: time="2024-12-13T09:12:34.231472368Z" level=info msg="StopPodSandbox for \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\""
Dec 13 09:12:34.333500 sshd[4983]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166  user=root
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.322 [WARNING][5000] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0", GenerateName:"calico-kube-controllers-55499f54f6-", Namespace:"calico-system", SelfLink:"", UID:"88388b59-0e34-45d0-b4c1-5da85c561522", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55499f54f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0", Pod:"calico-kube-controllers-55499f54f6-hpbbq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali454f48b5050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.323 [INFO][5000] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.324 [INFO][5000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" iface="eth0" netns=""
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.324 [INFO][5000] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.324 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.371 [INFO][5006] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" HandleID="k8s-pod-network.6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.371 [INFO][5006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.371 [INFO][5006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.383 [WARNING][5006] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" HandleID="k8s-pod-network.6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.383 [INFO][5006] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" HandleID="k8s-pod-network.6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.390 [INFO][5006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:34.398899 containerd[1473]: 2024-12-13 09:12:34.394 [INFO][5000] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:34.398899 containerd[1473]: time="2024-12-13T09:12:34.398312533Z" level=info msg="TearDown network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\" successfully"
Dec 13 09:12:34.398899 containerd[1473]: time="2024-12-13T09:12:34.398380824Z" level=info msg="StopPodSandbox for \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\" returns successfully"
Dec 13 09:12:34.402951 containerd[1473]: time="2024-12-13T09:12:34.400933338Z" level=info msg="RemovePodSandbox for \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\""
Dec 13 09:12:34.402951 containerd[1473]: time="2024-12-13T09:12:34.400996617Z" level=info msg="Forcibly stopping sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\""
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.476 [WARNING][5024] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0", GenerateName:"calico-kube-controllers-55499f54f6-", Namespace:"calico-system", SelfLink:"", UID:"88388b59-0e34-45d0-b4c1-5da85c561522", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55499f54f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"8b6e792a666f5a6d4474f87f4546c118b4bd5ce2e301c2f8bfafe643928202a0", Pod:"calico-kube-controllers-55499f54f6-hpbbq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali454f48b5050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.477 [INFO][5024] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.477 [INFO][5024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" iface="eth0" netns=""
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.477 [INFO][5024] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.477 [INFO][5024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.535 [INFO][5031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" HandleID="k8s-pod-network.6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.539 [INFO][5031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.539 [INFO][5031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.555 [WARNING][5031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" HandleID="k8s-pod-network.6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.556 [INFO][5031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" HandleID="k8s-pod-network.6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--kube--controllers--55499f54f6--hpbbq-eth0"
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.563 [INFO][5031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:34.569548 containerd[1473]: 2024-12-13 09:12:34.567 [INFO][5024] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b"
Dec 13 09:12:34.572897 containerd[1473]: time="2024-12-13T09:12:34.570458859Z" level=info msg="TearDown network for sandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\" successfully"
Dec 13 09:12:34.588249 containerd[1473]: time="2024-12-13T09:12:34.587999179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 09:12:34.588689 containerd[1473]: time="2024-12-13T09:12:34.588520546Z" level=info msg="RemovePodSandbox \"6a1d7b00478d28fa9caa235be8bbe6596f765aaf65b0c90482e21920db7ac76b\" returns successfully"
Dec 13 09:12:34.590890 containerd[1473]: time="2024-12-13T09:12:34.589994891Z" level=info msg="StopPodSandbox for \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\""
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.686 [WARNING][5053] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ca8008c-9c95-43aa-8201-4f1e59b8ea10", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e", Pod:"coredns-6f6b679f8f-sjqdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22cd407bfdd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.687 [INFO][5053] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.687 [INFO][5053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" iface="eth0" netns=""
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.687 [INFO][5053] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.687 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.746 [INFO][5059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" HandleID="k8s-pod-network.6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.748 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.748 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.786 [WARNING][5059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" HandleID="k8s-pod-network.6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.787 [INFO][5059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" HandleID="k8s-pod-network.6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.798 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:34.811501 containerd[1473]: 2024-12-13 09:12:34.805 [INFO][5053] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:34.813477 containerd[1473]: time="2024-12-13T09:12:34.811750817Z" level=info msg="TearDown network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\" successfully"
Dec 13 09:12:34.813477 containerd[1473]: time="2024-12-13T09:12:34.811789631Z" level=info msg="StopPodSandbox for \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\" returns successfully"
Dec 13 09:12:34.813477 containerd[1473]: time="2024-12-13T09:12:34.812640066Z" level=info msg="RemovePodSandbox for \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\""
Dec 13 09:12:34.813477 containerd[1473]: time="2024-12-13T09:12:34.812689703Z" level=info msg="Forcibly stopping sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\""
Dec 13 09:12:34.928912 sshd[4863]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:34.949999 systemd[1]: sshd@16-165.232.145.99:22-147.75.109.163:42124.service: Deactivated successfully.
Dec 13 09:12:34.959628 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 09:12:34.969984 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit.
Dec 13 09:12:34.978884 systemd-logind[1450]: Removed session 13.
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.065 [WARNING][5078] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ca8008c-9c95-43aa-8201-4f1e59b8ea10", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"4473ecaebb4148c4fbe6f5ffe7c4b039f9b30cce241d61d7b006a8ca9fee2e5e", Pod:"coredns-6f6b679f8f-sjqdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22cd407bfdd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.068 [INFO][5078] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.068 [INFO][5078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" iface="eth0" netns=""
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.069 [INFO][5078] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.069 [INFO][5078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.137 [INFO][5087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" HandleID="k8s-pod-network.6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.137 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.137 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.151 [WARNING][5087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" HandleID="k8s-pod-network.6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.151 [INFO][5087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" HandleID="k8s-pod-network.6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a" Workload="ci--4081.2.1--7--516c4b3017-k8s-coredns--6f6b679f8f--sjqdz-eth0"
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.154 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:35.162313 containerd[1473]: 2024-12-13 09:12:35.158 [INFO][5078] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a"
Dec 13 09:12:35.164385 containerd[1473]: time="2024-12-13T09:12:35.162387126Z" level=info msg="TearDown network for sandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\" successfully"
Dec 13 09:12:35.169275 containerd[1473]: time="2024-12-13T09:12:35.168986920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 09:12:35.169275 containerd[1473]: time="2024-12-13T09:12:35.169117960Z" level=info msg="RemovePodSandbox \"6f4016b4f8a18f51a7b78d875198adc689dc2ecbb9cf2c79c6331ffda645991a\" returns successfully"
Dec 13 09:12:35.170636 containerd[1473]: time="2024-12-13T09:12:35.169847215Z" level=info msg="StopPodSandbox for \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\""
Dec 13 09:12:35.192233 kubelet[2524]: I1213 09:12:35.191776    2524 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 09:12:35.205393 kubelet[2524]: I1213 09:12:35.205234    2524 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.253 [WARNING][5105] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0", GenerateName:"calico-apiserver-7c78887f5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"94ad6c01-b7b0-4277-bf3f-b065b6556e24", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c78887f5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809", Pod:"calico-apiserver-7c78887f5b-pnmjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali168385a15a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.253 [INFO][5105] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.253 [INFO][5105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" iface="eth0" netns=""
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.253 [INFO][5105] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.253 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.301 [INFO][5112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" HandleID="k8s-pod-network.90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.302 [INFO][5112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.302 [INFO][5112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.317 [WARNING][5112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" HandleID="k8s-pod-network.90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.318 [INFO][5112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" HandleID="k8s-pod-network.90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.321 [INFO][5112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:35.333478 containerd[1473]: 2024-12-13 09:12:35.327 [INFO][5105] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:35.336702 containerd[1473]: time="2024-12-13T09:12:35.335092698Z" level=info msg="TearDown network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\" successfully"
Dec 13 09:12:35.336702 containerd[1473]: time="2024-12-13T09:12:35.335293130Z" level=info msg="StopPodSandbox for \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\" returns successfully"
Dec 13 09:12:35.338514 containerd[1473]: time="2024-12-13T09:12:35.338346868Z" level=info msg="RemovePodSandbox for \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\""
Dec 13 09:12:35.338514 containerd[1473]: time="2024-12-13T09:12:35.338403459Z" level=info msg="Forcibly stopping sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\""
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.404 [WARNING][5130] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0", GenerateName:"calico-apiserver-7c78887f5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"94ad6c01-b7b0-4277-bf3f-b065b6556e24", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c78887f5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"8de4a393b47f44a73774b033ab67c0cc1d6ead319e2e03a001c92ee1beec5809", Pod:"calico-apiserver-7c78887f5b-pnmjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali168385a15a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.405 [INFO][5130] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.405 [INFO][5130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" iface="eth0" netns=""
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.405 [INFO][5130] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.405 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.446 [INFO][5136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" HandleID="k8s-pod-network.90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.446 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.446 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.459 [WARNING][5136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" HandleID="k8s-pod-network.90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.459 [INFO][5136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" HandleID="k8s-pod-network.90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--pnmjx-eth0"
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.463 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:35.470481 containerd[1473]: 2024-12-13 09:12:35.467 [INFO][5130] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395"
Dec 13 09:12:35.470481 containerd[1473]: time="2024-12-13T09:12:35.469920501Z" level=info msg="TearDown network for sandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\" successfully"
Dec 13 09:12:35.486668 containerd[1473]: time="2024-12-13T09:12:35.486572384Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 09:12:35.487015 containerd[1473]: time="2024-12-13T09:12:35.486711688Z" level=info msg="RemovePodSandbox \"90e71d6fb590e1a75a576b51b28790b629ea987978bc73ebadc69e17c860a395\" returns successfully"
Dec 13 09:12:35.488683 containerd[1473]: time="2024-12-13T09:12:35.488628086Z" level=info msg="StopPodSandbox for \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\""
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.563 [WARNING][5154] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0", GenerateName:"calico-apiserver-7c78887f5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a2df62-2b3b-4b64-8737-3c196ad7319a", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c78887f5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175", Pod:"calico-apiserver-7c78887f5b-2s422", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfd3aeebda3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.564 [INFO][5154] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.564 [INFO][5154] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" iface="eth0" netns=""
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.564 [INFO][5154] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.564 [INFO][5154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.598 [INFO][5161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" HandleID="k8s-pod-network.f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.599 [INFO][5161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.599 [INFO][5161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.608 [WARNING][5161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" HandleID="k8s-pod-network.f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.608 [INFO][5161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" HandleID="k8s-pod-network.f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.616 [INFO][5161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:35.622861 containerd[1473]: 2024-12-13 09:12:35.619 [INFO][5154] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:35.622861 containerd[1473]: time="2024-12-13T09:12:35.622445065Z" level=info msg="TearDown network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\" successfully"
Dec 13 09:12:35.622861 containerd[1473]: time="2024-12-13T09:12:35.622478257Z" level=info msg="StopPodSandbox for \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\" returns successfully"
Dec 13 09:12:35.624836 containerd[1473]: time="2024-12-13T09:12:35.624342298Z" level=info msg="RemovePodSandbox for \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\""
Dec 13 09:12:35.624836 containerd[1473]: time="2024-12-13T09:12:35.624388070Z" level=info msg="Forcibly stopping sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\""
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.687 [WARNING][5180] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0", GenerateName:"calico-apiserver-7c78887f5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a2df62-2b3b-4b64-8737-3c196ad7319a", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c78887f5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-7-516c4b3017", ContainerID:"524cbdd210555469312c6539a4820c36670b9594828c6982431454d1e4595175", Pod:"calico-apiserver-7c78887f5b-2s422", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfd3aeebda3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.687 [INFO][5180] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.687 [INFO][5180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" iface="eth0" netns=""
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.687 [INFO][5180] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.687 [INFO][5180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.726 [INFO][5186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" HandleID="k8s-pod-network.f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.726 [INFO][5186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.726 [INFO][5186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.741 [WARNING][5186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" HandleID="k8s-pod-network.f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.741 [INFO][5186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" HandleID="k8s-pod-network.f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662" Workload="ci--4081.2.1--7--516c4b3017-k8s-calico--apiserver--7c78887f5b--2s422-eth0"
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.744 [INFO][5186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:35.750174 containerd[1473]: 2024-12-13 09:12:35.746 [INFO][5180] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662"
Dec 13 09:12:35.750174 containerd[1473]: time="2024-12-13T09:12:35.748725491Z" level=info msg="TearDown network for sandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\" successfully"
Dec 13 09:12:35.762657 containerd[1473]: time="2024-12-13T09:12:35.762506383Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 09:12:35.762657 containerd[1473]: time="2024-12-13T09:12:35.762675072Z" level=info msg="RemovePodSandbox \"f05121bfade7025e7596d3e24d1bf87410a6bc4369034f4bdbf54aba4f936662\" returns successfully"
Dec 13 09:12:35.856215 sshd[4742]: PAM: Permission denied for root from 218.92.0.166
Dec 13 09:12:36.212287 sshd[5193]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166  user=root
Dec 13 09:12:38.015463 sshd[4742]: PAM: Permission denied for root from 218.92.0.166
Dec 13 09:12:38.192067 sshd[4742]: Received disconnect from 218.92.0.166 port 63287:11:  [preauth]
Dec 13 09:12:38.192067 sshd[4742]: Disconnected from authenticating user root 218.92.0.166 port 63287 [preauth]
Dec 13 09:12:38.196157 systemd[1]: sshd@15-165.232.145.99:22-218.92.0.166:63287.service: Deactivated successfully.
Dec 13 09:12:39.950636 systemd[1]: Started sshd@17-165.232.145.99:22-147.75.109.163:34648.service - OpenSSH per-connection server daemon (147.75.109.163:34648).
Dec 13 09:12:40.079314 sshd[5200]: Accepted publickey for core from 147.75.109.163 port 34648 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:40.082292 sshd[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:40.089909 systemd-logind[1450]: New session 14 of user core.
Dec 13 09:12:40.097448 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 09:12:40.270765 systemd[1]: Started sshd@18-165.232.145.99:22-51.195.220.128:34546.service - OpenSSH per-connection server daemon (51.195.220.128:34546).
Dec 13 09:12:40.522308 sshd[5200]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:40.531822 systemd[1]: sshd@17-165.232.145.99:22-147.75.109.163:34648.service: Deactivated successfully.
Dec 13 09:12:40.537004 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 09:12:40.538593 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit.
Dec 13 09:12:40.541198 systemd-logind[1450]: Removed session 14.
Dec 13 09:12:41.066565 sshd[5210]: Invalid user user13 from 51.195.220.128 port 34546
Dec 13 09:12:41.213747 sshd[5210]: Received disconnect from 51.195.220.128 port 34546:11: Bye Bye [preauth]
Dec 13 09:12:41.213747 sshd[5210]: Disconnected from invalid user user13 51.195.220.128 port 34546 [preauth]
Dec 13 09:12:41.216535 systemd[1]: sshd@18-165.232.145.99:22-51.195.220.128:34546.service: Deactivated successfully.
Dec 13 09:12:45.544530 systemd[1]: Started sshd@19-165.232.145.99:22-147.75.109.163:34658.service - OpenSSH per-connection server daemon (147.75.109.163:34658).
Dec 13 09:12:45.629786 sshd[5226]: Accepted publickey for core from 147.75.109.163 port 34658 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:45.632492 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:45.642826 systemd-logind[1450]: New session 15 of user core.
Dec 13 09:12:45.649886 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 09:12:45.864931 kubelet[2524]: E1213 09:12:45.864627    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:45.905162 kubelet[2524]: I1213 09:12:45.905081    2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bbgp8" podStartSLOduration=45.02322384 podStartE2EDuration="57.905058591s" podCreationTimestamp="2024-12-13 09:11:48 +0000 UTC" firstStartedPulling="2024-12-13 09:12:20.480863081 +0000 UTC m=+47.932016694" lastFinishedPulling="2024-12-13 09:12:33.362697817 +0000 UTC m=+60.813851445" observedRunningTime="2024-12-13 09:12:35.006403993 +0000 UTC m=+62.457557617" watchObservedRunningTime="2024-12-13 09:12:45.905058591 +0000 UTC m=+73.356212214"
Dec 13 09:12:45.924484 sshd[5226]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:45.955363 systemd[1]: sshd@19-165.232.145.99:22-147.75.109.163:34658.service: Deactivated successfully.
Dec 13 09:12:45.960528 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 09:12:45.965353 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit.
Dec 13 09:12:45.967483 systemd-logind[1450]: Removed session 15.
Dec 13 09:12:50.958282 systemd[1]: Started sshd@20-165.232.145.99:22-147.75.109.163:52362.service - OpenSSH per-connection server daemon (147.75.109.163:52362).
Dec 13 09:12:51.117001 sshd[5262]: Accepted publickey for core from 147.75.109.163 port 52362 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:51.121565 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:51.138521 systemd-logind[1450]: New session 16 of user core.
Dec 13 09:12:51.147578 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 09:12:51.727861 sshd[5262]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:51.751312 systemd[1]: sshd@20-165.232.145.99:22-147.75.109.163:52362.service: Deactivated successfully.
Dec 13 09:12:51.762379 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 09:12:51.767294 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit.
Dec 13 09:12:51.780271 systemd[1]: Started sshd@21-165.232.145.99:22-147.75.109.163:52368.service - OpenSSH per-connection server daemon (147.75.109.163:52368).
Dec 13 09:12:51.784279 systemd-logind[1450]: Removed session 16.
Dec 13 09:12:51.888176 sshd[5274]: Accepted publickey for core from 147.75.109.163 port 52368 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:51.889718 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:51.901584 systemd-logind[1450]: New session 17 of user core.
Dec 13 09:12:51.908635 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 09:12:52.554607 sshd[5274]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:52.578216 systemd[1]: Started sshd@22-165.232.145.99:22-147.75.109.163:52382.service - OpenSSH per-connection server daemon (147.75.109.163:52382).
Dec 13 09:12:52.583641 systemd[1]: sshd@21-165.232.145.99:22-147.75.109.163:52368.service: Deactivated successfully.
Dec 13 09:12:52.590895 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 09:12:52.596065 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
Dec 13 09:12:52.603776 systemd-logind[1450]: Removed session 17.
Dec 13 09:12:52.798994 sshd[5284]: Accepted publickey for core from 147.75.109.163 port 52382 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:52.804674 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:52.879135 systemd-logind[1450]: New session 18 of user core.
Dec 13 09:12:52.902441 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 09:12:55.738792 kubelet[2524]: E1213 09:12:55.738722    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:57.548371 sshd[5284]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:57.576986 systemd[1]: Started sshd@23-165.232.145.99:22-147.75.109.163:36792.service - OpenSSH per-connection server daemon (147.75.109.163:36792).
Dec 13 09:12:57.583651 systemd[1]: sshd@22-165.232.145.99:22-147.75.109.163:52382.service: Deactivated successfully.
Dec 13 09:12:57.600160 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 09:12:57.600522 systemd[1]: session-18.scope: Consumed 1.003s CPU time.
Dec 13 09:12:57.610848 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
Dec 13 09:12:57.623715 systemd-logind[1450]: Removed session 18.
Dec 13 09:12:57.747636 sshd[5320]: Accepted publickey for core from 147.75.109.163 port 36792 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:57.753903 sshd[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:57.776607 systemd-logind[1450]: New session 19 of user core.
Dec 13 09:12:57.783364 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 09:12:59.109930 kubelet[2524]: I1213 09:12:59.108563    2524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 09:12:59.245163 sshd[5320]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:59.268740 systemd[1]: sshd@23-165.232.145.99:22-147.75.109.163:36792.service: Deactivated successfully.
Dec 13 09:12:59.282720 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 09:12:59.292188 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
Dec 13 09:12:59.314922 systemd[1]: Started sshd@24-165.232.145.99:22-147.75.109.163:36806.service - OpenSSH per-connection server daemon (147.75.109.163:36806).
Dec 13 09:12:59.333389 systemd-logind[1450]: Removed session 19.
Dec 13 09:12:59.571065 sshd[5337]: Accepted publickey for core from 147.75.109.163 port 36806 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:59.573793 sshd[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:59.595718 systemd-logind[1450]: New session 20 of user core.
Dec 13 09:12:59.602303 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 09:12:59.923080 sshd[5337]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:59.934547 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit.
Dec 13 09:12:59.936322 systemd[1]: sshd@24-165.232.145.99:22-147.75.109.163:36806.service: Deactivated successfully.
Dec 13 09:12:59.942347 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 09:12:59.946259 systemd-logind[1450]: Removed session 20.
Dec 13 09:13:03.057566 kubelet[2524]: E1213 09:13:02.971271    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:13:03.705001 kubelet[2524]: E1213 09:13:03.703725    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:13:04.951002 systemd[1]: Started sshd@25-165.232.145.99:22-147.75.109.163:36820.service - OpenSSH per-connection server daemon (147.75.109.163:36820).
Dec 13 09:13:05.169513 sshd[5381]: Accepted publickey for core from 147.75.109.163 port 36820 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:13:05.170555 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:13:05.188689 systemd-logind[1450]: New session 21 of user core.
Dec 13 09:13:05.196530 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 09:13:05.825766 sshd[5381]: pam_unix(sshd:session): session closed for user core
Dec 13 09:13:05.837298 systemd[1]: sshd@25-165.232.145.99:22-147.75.109.163:36820.service: Deactivated successfully.
Dec 13 09:13:05.851838 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 09:13:05.855188 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit.
Dec 13 09:13:05.858350 systemd-logind[1450]: Removed session 21.
Dec 13 09:13:09.724397 kubelet[2524]: E1213 09:13:09.723352    2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:13:10.859535 systemd[1]: Started sshd@26-165.232.145.99:22-147.75.109.163:48116.service - OpenSSH per-connection server daemon (147.75.109.163:48116).
Dec 13 09:13:10.938690 sshd[5401]: Accepted publickey for core from 147.75.109.163 port 48116 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:13:10.941141 sshd[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:13:10.954010 systemd-logind[1450]: New session 22 of user core.
Dec 13 09:13:10.964323 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 09:13:11.234215 sshd[5401]: pam_unix(sshd:session): session closed for user core
Dec 13 09:13:11.243530 systemd[1]: sshd@26-165.232.145.99:22-147.75.109.163:48116.service: Deactivated successfully.
Dec 13 09:13:11.247651 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 09:13:11.250915 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit.
Dec 13 09:13:11.252845 systemd-logind[1450]: Removed session 22.
Dec 13 09:13:16.275835 systemd[1]: Started sshd@27-165.232.145.99:22-147.75.109.163:49896.service - OpenSSH per-connection server daemon (147.75.109.163:49896).
Dec 13 09:13:16.376688 sshd[5435]: Accepted publickey for core from 147.75.109.163 port 49896 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:13:16.386228 sshd[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:13:16.396405 systemd-logind[1450]: New session 23 of user core.
Dec 13 09:13:16.405441 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 09:13:16.643414 sshd[5435]: pam_unix(sshd:session): session closed for user core
Dec 13 09:13:16.658702 systemd[1]: sshd@27-165.232.145.99:22-147.75.109.163:49896.service: Deactivated successfully.
Dec 13 09:13:16.665336 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 09:13:16.667732 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit.
Dec 13 09:13:16.671667 systemd-logind[1450]: Removed session 23.
Dec 13 09:13:21.667778 systemd[1]: Started sshd@28-165.232.145.99:22-147.75.109.163:49912.service - OpenSSH per-connection server daemon (147.75.109.163:49912).
Dec 13 09:13:21.743201 sshd[5448]: Accepted publickey for core from 147.75.109.163 port 49912 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:13:21.746057 sshd[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:13:21.753161 systemd-logind[1450]: New session 24 of user core.
Dec 13 09:13:21.760533 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 09:13:21.948923 sshd[5448]: pam_unix(sshd:session): session closed for user core
Dec 13 09:13:21.957996 systemd[1]: sshd@28-165.232.145.99:22-147.75.109.163:49912.service: Deactivated successfully.
Dec 13 09:13:21.961755 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 09:13:21.964959 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit.
Dec 13 09:13:21.968249 systemd-logind[1450]: Removed session 24.