Dec 13 01:57:27.959496 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:57:27.959523 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:57:27.959535 kernel: KASLR enabled
Dec 13 01:57:27.959540 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:57:27.959546 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Dec 13 01:57:27.959552 kernel: random: crng init done
Dec 13 01:57:27.959559 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:57:27.959564 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Dec 13 01:57:27.959571 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:57:27.959577 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:27.959584 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:27.959590 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:27.959596 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:27.959602 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:27.959609 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:27.959617 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:27.959623 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:27.959630 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:27.959636 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 01:57:27.959642 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 13 01:57:27.959649 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:57:27.959655 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 01:57:27.959662 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Dec 13 01:57:27.959668 kernel: Zone ranges:
Dec 13 01:57:27.959674 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 01:57:27.959680 kernel: DMA32 empty
Dec 13 01:57:27.959688 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Dec 13 01:57:27.959694 kernel: Movable zone start for each node
Dec 13 01:57:27.959700 kernel: Early memory node ranges
Dec 13 01:57:27.959707 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Dec 13 01:57:27.959713 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Dec 13 01:57:27.959719 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Dec 13 01:57:27.959725 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Dec 13 01:57:27.959732 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Dec 13 01:57:27.959738 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 01:57:27.959744 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Dec 13 01:57:27.959750 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:57:27.959758 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:57:27.959764 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:57:27.959770 kernel: psci: Trusted OS migration not required
Dec 13 01:57:27.959779 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:57:27.959786 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 01:57:27.959793 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:57:27.959801 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:57:27.959808 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:57:27.959814 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:57:27.959821 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:57:27.959827 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:57:27.959834 kernel: CPU features: detected: Spectre-v4
Dec 13 01:57:27.959840 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:57:27.959847 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:57:27.959854 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:57:27.959860 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:57:27.959867 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:57:27.959875 kernel: alternatives: applying boot alternatives
Dec 13 01:57:27.959883 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:57:27.959890 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:57:27.959897 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:57:27.959903 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:57:27.959910 kernel: Fallback order for Node 0: 0
Dec 13 01:57:27.959917 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Dec 13 01:57:27.959923 kernel: Policy zone: Normal
Dec 13 01:57:27.959930 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:57:27.959937 kernel: software IO TLB: area num 2.
Dec 13 01:57:27.959944 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Dec 13 01:57:27.959952 kernel: Memory: 3881592K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 214408K reserved, 0K cma-reserved)
Dec 13 01:57:27.959959 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:57:27.959965 kernel: trace event string verifier disabled
Dec 13 01:57:27.959972 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:57:27.959980 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:57:27.959986 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:57:27.959993 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:57:27.960000 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:57:27.960007 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:57:27.961065 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:57:27.961091 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:57:27.961104 kernel: GICv3: 256 SPIs implemented
Dec 13 01:57:27.961111 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:57:27.961118 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:57:27.961125 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:57:27.961131 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 01:57:27.961138 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 01:57:27.961145 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:57:27.961152 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:57:27.961159 kernel: GICv3: using LPI property table @0x00000001000e0000
Dec 13 01:57:27.961166 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Dec 13 01:57:27.961173 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:57:27.961182 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:57:27.961189 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:57:27.961196 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:57:27.961203 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:57:27.961210 kernel: Console: colour dummy device 80x25
Dec 13 01:57:27.961217 kernel: ACPI: Core revision 20230628
Dec 13 01:57:27.961224 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:57:27.961231 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:57:27.961238 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:57:27.961245 kernel: landlock: Up and running.
Dec 13 01:57:27.961254 kernel: SELinux: Initializing.
Dec 13 01:57:27.961261 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:57:27.961268 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:57:27.961275 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:57:27.961282 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:57:27.961289 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:57:27.961296 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:57:27.961303 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 01:57:27.961310 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 01:57:27.961319 kernel: Remapping and enabling EFI services.
Dec 13 01:57:27.961326 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:57:27.961333 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:57:27.961340 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 01:57:27.961348 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Dec 13 01:57:27.961355 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:57:27.961362 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:57:27.961368 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:57:27.961375 kernel: SMP: Total of 2 processors activated.
Dec 13 01:57:27.961382 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:57:27.961391 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:57:27.961398 kernel: CPU features: detected: Common not Private translations
Dec 13 01:57:27.961410 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:57:27.961419 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 01:57:27.961427 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:57:27.961434 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:57:27.961441 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:57:27.961449 kernel: CPU features: detected: RAS Extension Support
Dec 13 01:57:27.961457 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 01:57:27.961466 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:57:27.961473 kernel: alternatives: applying system-wide alternatives
Dec 13 01:57:27.961480 kernel: devtmpfs: initialized
Dec 13 01:57:27.961488 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:57:27.961495 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:57:27.961502 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:57:27.961510 kernel: SMBIOS 3.0.0 present.
Dec 13 01:57:27.961519 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Dec 13 01:57:27.961526 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:57:27.961534 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:57:27.961541 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:57:27.961549 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:57:27.961556 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:57:27.961564 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Dec 13 01:57:27.961571 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:57:27.961579 kernel: cpuidle: using governor menu
Dec 13 01:57:27.961587 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:57:27.961595 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:57:27.961602 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:57:27.961609 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:57:27.961617 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:57:27.961624 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:57:27.961631 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:57:27.961639 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:57:27.961646 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:57:27.961655 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:57:27.961662 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:57:27.961670 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:57:27.961677 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:57:27.961684 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:57:27.961692 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:57:27.961699 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:57:27.961707 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:57:27.961714 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:57:27.961723 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:57:27.961730 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:57:27.961737 kernel: ACPI: Interpreter enabled
Dec 13 01:57:27.961745 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:57:27.961752 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:57:27.961760 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:57:27.961767 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:57:27.961774 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:57:27.961939 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:57:27.963329 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:57:27.963448 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:57:27.963514 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 01:57:27.963577 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 01:57:27.963587 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 01:57:27.963595 kernel: PCI host bridge to bus 0000:00
Dec 13 01:57:27.963670 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 01:57:27.963737 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:57:27.963796 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 01:57:27.963854 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:57:27.963935 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 01:57:27.964029 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Dec 13 01:57:27.964106 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Dec 13 01:57:27.964178 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 01:57:27.964263 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:57:27.964331 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Dec 13 01:57:27.964405 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 01:57:27.964472 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Dec 13 01:57:27.964544 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 01:57:27.964610 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Dec 13 01:57:27.964686 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 01:57:27.964760 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Dec 13 01:57:27.964834 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 01:57:27.964900 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Dec 13 01:57:27.964972 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 01:57:27.966442 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Dec 13 01:57:27.966555 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 01:57:27.967074 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Dec 13 01:57:27.967185 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 01:57:27.967259 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Dec 13 01:57:27.967337 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:57:27.967408 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Dec 13 01:57:27.967495 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Dec 13 01:57:27.967563 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Dec 13 01:57:27.967643 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:57:27.967715 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Dec 13 01:57:27.967787 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:57:27.967858 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 01:57:27.967939 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 01:57:27.968012 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Dec 13 01:57:27.969191 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 01:57:27.969264 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Dec 13 01:57:27.969332 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 13 01:57:27.969407 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 01:57:27.969475 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 13 01:57:27.969576 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 01:57:27.969866 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 13 01:57:27.969967 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 01:57:27.970113 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Dec 13 01:57:27.970197 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 01:57:27.970287 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:57:27.970366 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Dec 13 01:57:27.970441 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Dec 13 01:57:27.970514 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 01:57:27.970596 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 01:57:27.970674 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 13 01:57:27.970751 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 13 01:57:27.970826 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 01:57:27.970904 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 01:57:27.970973 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 13 01:57:27.971089 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 01:57:27.971169 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 13 01:57:27.971237 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 13 01:57:27.971313 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 01:57:27.971384 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 13 01:57:27.971461 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 01:57:27.971539 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 01:57:27.971607 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 13 01:57:27.971686 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Dec 13 01:57:27.971761 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 01:57:27.971834 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 13 01:57:27.971906 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 13 01:57:27.971984 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 01:57:27.972113 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Dec 13 01:57:27.972186 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Dec 13 01:57:27.972263 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 01:57:27.972331 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 13 01:57:27.972399 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 13 01:57:27.972477 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 01:57:27.972547 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 13 01:57:27.972619 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 13 01:57:27.972690 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Dec 13 01:57:27.972761 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 01:57:27.972832 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Dec 13 01:57:27.972902 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 01:57:27.972974 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Dec 13 01:57:27.973057 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 01:57:27.973134 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Dec 13 01:57:27.973204 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 01:57:27.973276 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Dec 13 01:57:27.973346 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 01:57:27.973418 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Dec 13 01:57:27.973489 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 01:57:27.973562 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Dec 13 01:57:27.973633 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 01:57:27.973705 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Dec 13 01:57:27.973775 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 01:57:27.974559 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Dec 13 01:57:27.974678 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 01:57:27.974751 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Dec 13 01:57:27.974822 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Dec 13 01:57:27.974918 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Dec 13 01:57:27.974987 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 01:57:27.975202 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Dec 13 01:57:27.975279 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 01:57:27.975350 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Dec 13 01:57:27.975422 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 01:57:27.975872 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Dec 13 01:57:27.975956 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 01:57:27.976058 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Dec 13 01:57:27.976137 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 01:57:27.976205 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Dec 13 01:57:27.976270 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 01:57:27.976339 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Dec 13 01:57:27.976404 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 01:57:27.976472 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Dec 13 01:57:27.976540 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 01:57:27.976613 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Dec 13 01:57:27.976678 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Dec 13 01:57:27.976755 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Dec 13 01:57:27.976830 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Dec 13 01:57:27.976902 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:57:27.976973 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Dec 13 01:57:27.977059 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:57:27.977132 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 01:57:27.977205 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Dec 13 01:57:27.977269 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 01:57:27.977341 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Dec 13 01:57:27.977411 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:57:27.977482 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 01:57:27.977545 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Dec 13 01:57:27.977609 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 01:57:27.977682 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 01:57:27.977758 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Dec 13 01:57:27.977824 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:57:27.977888 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 01:57:27.977955 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Dec 13 01:57:27.979381 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 01:57:27.979498 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 01:57:27.979568 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:57:27.979640 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 01:57:27.979704 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Dec 13 01:57:27.979768 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 01:57:27.979840 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Dec 13 01:57:27.979915 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:57:27.979980 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 01:57:27.981125 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 13 01:57:27.981214 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 01:57:27.981289 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Dec 13 01:57:27.981369 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Dec 13 01:57:27.981439 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:57:27.981506 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 01:57:27.981576 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 13 01:57:27.981649 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 01:57:27.981727 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Dec 13 01:57:27.981796 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Dec 13 01:57:27.981879 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Dec 13 01:57:27.981955 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:57:27.982058 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 01:57:27.982189 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 13 01:57:27.982274 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 01:57:27.982346 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:57:27.982414 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 01:57:27.982479 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 13 01:57:27.982552 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 01:57:27.982621 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:57:27.982692 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Dec 13 01:57:27.982756 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Dec 13 01:57:27.982831 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 01:57:27.982901 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 01:57:27.982960 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:57:27.984544 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 01:57:27.984656 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 01:57:27.984728 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 13 01:57:27.984789 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 01:57:27.984865 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Dec 13 01:57:27.984935 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 13 01:57:27.984995 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 01:57:27.985199 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Dec 13 01:57:27.985266 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 13 01:57:27.985333 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 01:57:27.985407 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 01:57:27.985467 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Dec 13 01:57:27.985537 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 01:57:27.985615 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Dec 13 01:57:27.985679 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Dec 13 01:57:27.985739 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 01:57:27.985806 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Dec 13 01:57:27.985872 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Dec 13 01:57:27.985943 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 01:57:27.986010 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Dec 13 01:57:27.986132 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Dec 13 01:57:27.986201 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 01:57:27.986269 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Dec 13 01:57:27.986336 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Dec 13 01:57:27.986395 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 01:57:27.986466 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Dec 13 01:57:27.986534 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Dec 13 01:57:27.986594 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 01:57:27.986606 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:57:27.986614 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:57:27.986622 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:57:27.986630 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:57:27.986638 kernel: iommu: Default domain type: Translated
Dec 13 01:57:27.986647 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:57:27.986657 kernel: efivars: Registered efivars operations
Dec 13 01:57:27.986666 kernel: vgaarb: loaded
Dec 13 01:57:27.986675 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:57:27.986686 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:57:27.986696 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:57:27.986703 kernel: pnp: PnP ACPI init
Dec 13 01:57:27.986777 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 01:57:27.986789 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:57:27.986796 kernel: NET: Registered PF_INET protocol family
Dec 13 01:57:27.986804 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:57:27.986812 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:57:27.986822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:57:27.986830 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:57:27.986838 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:57:27.986848 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:57:27.986856 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:57:27.986863 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:57:27.986871 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:57:27.986946 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Dec 13 01:57:27.986957 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:57:27.986967 kernel: kvm [1]: HYP mode not available
Dec 13 01:57:27.986975 kernel: Initialise system trusted keyrings
Dec 13 01:57:27.986983 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:57:27.986993 kernel: Key type asymmetric registered
Dec 13 01:57:27.987002 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:57:27.987011 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:57:27.987044 kernel: io scheduler mq-deadline registered
Dec 13 01:57:27.987052 kernel: io scheduler kyber registered
Dec 13 01:57:27.987060 kernel: io scheduler bfq registered
Dec 13 01:57:27.987070 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 01:57:27.987146 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Dec 13 01:57:27.987219 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Dec 13 01:57:27.987285 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 01:57:27.987362 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Dec 13 01:57:27.987432 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Dec 13 01:57:27.987514 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 01:57:27.987600 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Dec 13 01:57:27.987675 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Dec 13 01:57:27.987753 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 01:57:27.987831 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Dec 13 01:57:27.987912 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Dec 13 01:57:27.987978 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 01:57:27.988067 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Dec 13 01:57:27.988135 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Dec 13 01:57:27.988207 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 01:57:27.988277 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Dec 13 01:57:27.988343 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Dec 13 01:57:27.988414 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 01:57:27.988485 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Dec 13 01:57:27.988557 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Dec 13 01:57:27.988621 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 01:57:27.988694 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Dec 13 01:57:27.988761 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Dec 13 01:57:27.988831 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 01:57:27.988844 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Dec 13 01:57:27.988911 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Dec 13 01:57:27.988986 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Dec 13 01:57:27.989073 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 01:57:27.989085 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:57:27.989094 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:57:27.989102 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:57:27.989179 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002)
Dec 13 01:57:27.989262 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Dec 13 01:57:27.989338 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Dec 13 01:57:27.989350 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:57:27.989359 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 01:57:27.989429 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Dec 13 01:57:27.989443 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Dec 13 01:57:27.989451 kernel: thunder_xcv, ver 1.0
Dec 13 01:57:27.989460 kernel: thunder_bgx, ver 1.0
Dec 13 01:57:27.989468 kernel: nicpf, ver 1.0
Dec 13 01:57:27.989475 kernel: nicvf, ver 1.0
Dec 13 01:57:27.989553 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:57:27.989616 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:57:27 UTC (1734055047)
Dec 13 01:57:27.989627 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:57:27.989635 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 01:57:27.989643 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:57:27.989652 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:57:27.989662 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:57:27.989671 kernel: Segment Routing with IPv6
Dec 13 01:57:27.989681 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:57:27.989689 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:57:27.989697 kernel: Key type dns_resolver registered
Dec 13 01:57:27.989705 kernel: registered taskstats version 1
Dec 13 01:57:27.989712 kernel: Loading compiled-in X.509 certificates
Dec 13 01:57:27.989720 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:57:27.989729 kernel: Key type .fscrypt registered
Dec 13 01:57:27.989737 kernel: Key type fscrypt-provisioning registered
Dec 13 01:57:27.989745 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:57:27.989753 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:57:27.989761 kernel: ima: No architecture policies found
Dec 13 01:57:27.989769 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:57:27.989777 kernel: clk: Disabling unused clocks
Dec 13 01:57:27.989784 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:57:27.989792 kernel: Run /init as init process
Dec 13 01:57:27.989801 kernel: with arguments:
Dec 13 01:57:27.989809 kernel: /init
Dec 13 01:57:27.989817 kernel: with environment:
Dec 13 01:57:27.989824 kernel: HOME=/
Dec 13 01:57:27.989831 kernel: TERM=linux
Dec 13 01:57:27.989839 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:57:27.989849 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:57:27.989858 systemd[1]: Detected virtualization kvm.
Dec 13 01:57:27.989868 systemd[1]: Detected architecture arm64.
Dec 13 01:57:27.989876 systemd[1]: Running in initrd.
Dec 13 01:57:27.989885 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:57:27.989892 systemd[1]: Hostname set to .
Dec 13 01:57:27.989901 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:57:27.989909 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:57:27.989917 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:57:27.989925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:57:27.989936 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:57:27.989944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:57:27.989953 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:57:27.989961 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:57:27.989971 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:57:27.989979 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:57:27.989989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:57:27.989998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:57:27.990006 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:57:27.990070 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:57:27.990080 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:57:27.990088 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:57:27.990149 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:57:27.990158 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:57:27.990172 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:57:27.990182 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:57:27.990190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:57:27.990198 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:57:27.990206 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:57:27.990215 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:57:27.990223 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:57:27.990231 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:57:27.990239 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:57:27.990250 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:57:27.990262 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:57:27.990271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:57:27.990279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:57:27.990287 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:57:27.990296 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:57:27.990304 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:57:27.990353 systemd-journald[236]: Collecting audit messages is disabled.
Dec 13 01:57:27.990375 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:57:27.990387 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:57:27.990396 systemd-journald[236]: Journal started
Dec 13 01:57:27.990417 systemd-journald[236]: Runtime Journal (/run/log/journal/6270573a2c994c7dbe011540c0893275) is 8.0M, max 76.5M, 68.5M free.
Dec 13 01:57:27.968324 systemd-modules-load[237]: Inserted module 'overlay'
Dec 13 01:57:27.991626 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:57:27.992775 kernel: Bridge firewalling registered
Dec 13 01:57:27.992381 systemd-modules-load[237]: Inserted module 'br_netfilter'
Dec 13 01:57:27.993664 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:57:27.995152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:57:27.995862 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:57:28.003263 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:57:28.005541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:57:28.008259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:57:28.012267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:57:28.023692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:57:28.034838 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:57:28.036618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:57:28.038208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:57:28.047269 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:57:28.053281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:57:28.061040 dracut-cmdline[275]: dracut-dracut-053
Dec 13 01:57:28.064033 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:57:28.092576 systemd-resolved[279]: Positive Trust Anchors:
Dec 13 01:57:28.092591 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:57:28.092624 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:57:28.102811 systemd-resolved[279]: Defaulting to hostname 'linux'.
Dec 13 01:57:28.103853 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:57:28.104509 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:57:28.165113 kernel: SCSI subsystem initialized
Dec 13 01:57:28.169049 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:57:28.177573 kernel: iscsi: registered transport (tcp)
Dec 13 01:57:28.191178 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:57:28.191326 kernel: QLogic iSCSI HBA Driver
Dec 13 01:57:28.241816 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:57:28.250343 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:57:28.273055 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:57:28.273134 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:57:28.274050 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:57:28.326167 kernel: raid6: neonx8 gen() 15770 MB/s
Dec 13 01:57:28.343062 kernel: raid6: neonx4 gen() 15563 MB/s
Dec 13 01:57:28.360061 kernel: raid6: neonx2 gen() 13253 MB/s
Dec 13 01:57:28.377061 kernel: raid6: neonx1 gen() 10500 MB/s
Dec 13 01:57:28.394073 kernel: raid6: int64x8 gen() 6968 MB/s
Dec 13 01:57:28.411057 kernel: raid6: int64x4 gen() 7343 MB/s
Dec 13 01:57:28.428064 kernel: raid6: int64x2 gen() 6133 MB/s
Dec 13 01:57:28.445065 kernel: raid6: int64x1 gen() 5063 MB/s
Dec 13 01:57:28.445141 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s
Dec 13 01:57:28.462064 kernel: raid6: .... xor() 11926 MB/s, rmw enabled
Dec 13 01:57:28.462183 kernel: raid6: using neon recovery algorithm
Dec 13 01:57:28.467231 kernel: xor: measuring software checksum speed
Dec 13 01:57:28.467297 kernel: 8regs : 16187 MB/sec
Dec 13 01:57:28.467310 kernel: 32regs : 19707 MB/sec
Dec 13 01:57:28.467321 kernel: arm64_neon : 26989 MB/sec
Dec 13 01:57:28.467332 kernel: xor: using function: arm64_neon (26989 MB/sec)
Dec 13 01:57:28.518116 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:57:28.534485 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:57:28.543198 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:57:28.583890 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Dec 13 01:57:28.587553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:57:28.596284 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:57:28.611953 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Dec 13 01:57:28.644154 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:57:28.650215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:57:28.703176 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:57:28.711535 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:57:28.732837 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:57:28.735715 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:57:28.737444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:57:28.738931 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:57:28.744228 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:57:28.759575 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:57:28.785708 kernel: scsi host0: Virtio SCSI HBA
Dec 13 01:57:28.837076 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:57:28.840049 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 13 01:57:28.870048 kernel: ACPI: bus type USB registered
Dec 13 01:57:28.870166 kernel: usbcore: registered new interface driver usbfs
Dec 13 01:57:28.870180 kernel: usbcore: registered new interface driver hub
Dec 13 01:57:28.870190 kernel: usbcore: registered new device driver usb
Dec 13 01:57:28.883179 kernel: sr 0:0:0:0: Power-on or device reset occurred
Dec 13 01:57:28.893883 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Dec 13 01:57:28.894010 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:57:28.894049 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 01:57:28.900200 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 13 01:57:28.900318 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 01:57:28.900402 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:57:28.900521 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 01:57:28.901205 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 13 01:57:28.901326 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 01:57:28.901423 kernel: hub 1-0:1.0: USB hub found
Dec 13 01:57:28.901530 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 01:57:28.901613 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 01:57:28.901709 kernel: hub 2-0:1.0: USB hub found
Dec 13 01:57:28.901801 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 01:57:28.889407 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:57:28.889525 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:57:28.890949 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:57:28.892112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:57:28.892732 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:57:28.895602 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:57:28.905783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:57:28.915803 kernel: sd 0:0:0:1: Power-on or device reset occurred
Dec 13 01:57:28.927358 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 13 01:57:28.927508 kernel: sd 0:0:0:1: [sda] Write Protect is off
Dec 13 01:57:28.927594 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Dec 13 01:57:28.927676 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:57:28.927758 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:57:28.927769 kernel: GPT:17805311 != 80003071
Dec 13 01:57:28.927778 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:57:28.927795 kernel: GPT:17805311 != 80003071
Dec 13 01:57:28.927804 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:57:28.927813 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:57:28.927824 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Dec 13 01:57:28.923851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:57:28.931303 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:57:28.966863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:57:28.978306 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (517)
Dec 13 01:57:28.980063 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (510)
Dec 13 01:57:28.984793 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 13 01:57:28.991306 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 13 01:57:28.997390 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 13 01:57:28.997986 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 13 01:57:29.006284 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:57:29.014173 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 01:57:29.021910 disk-uuid[575]: Primary Header is updated.
Dec 13 01:57:29.021910 disk-uuid[575]: Secondary Entries is updated.
Dec 13 01:57:29.021910 disk-uuid[575]: Secondary Header is updated.
Dec 13 01:57:29.037091 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:57:29.143062 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 01:57:29.382195 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Dec 13 01:57:29.524305 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Dec 13 01:57:29.524365 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Dec 13 01:57:29.525694 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Dec 13 01:57:29.579051 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Dec 13 01:57:29.579369 kernel: usbcore: registered new interface driver usbhid
Dec 13 01:57:29.579396 kernel: usbhid: USB HID core driver
Dec 13 01:57:30.051442 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:57:30.051512 disk-uuid[576]: The operation has completed successfully.
Dec 13 01:57:30.109094 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:57:30.109203 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:57:30.125273 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:57:30.128766 sh[590]: Success
Dec 13 01:57:30.145189 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:57:30.204837 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:57:30.206253 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:57:30.214185 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:57:30.236662 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:57:30.236728 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:57:30.236751 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:57:30.239626 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:57:30.239670 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:57:30.251057 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:57:30.253456 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:57:30.255249 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:57:30.265426 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:57:30.268206 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:57:30.283200 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:57:30.283277 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:57:30.283300 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:57:30.286281 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:57:30.286409 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:57:30.300419 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:57:30.301507 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:57:30.308314 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:57:30.317805 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:57:30.400041 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:57:30.412705 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:57:30.439800 systemd-networkd[775]: lo: Link UP
Dec 13 01:57:30.440445 systemd-networkd[775]: lo: Gained carrier
Dec 13 01:57:30.442590 systemd-networkd[775]: Enumeration completed
Dec 13 01:57:30.443223 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:57:30.445431 systemd[1]: Reached target network.target - Network.
Dec 13 01:57:30.446273 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:57:30.446277 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:57:30.448584 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:57:30.448587 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:57:30.449142 systemd-networkd[775]: eth0: Link UP
Dec 13 01:57:30.449145 systemd-networkd[775]: eth0: Gained carrier
Dec 13 01:57:30.449155 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:57:30.454368 systemd-networkd[775]: eth1: Link UP
Dec 13 01:57:30.454381 systemd-networkd[775]: eth1: Gained carrier
Dec 13 01:57:30.454391 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:57:30.456275 ignition[672]: Ignition 2.19.0
Dec 13 01:57:30.456284 ignition[672]: Stage: fetch-offline
Dec 13 01:57:30.456320 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:30.458538 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:57:30.456328 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:57:30.456478 ignition[672]: parsed url from cmdline: ""
Dec 13 01:57:30.456481 ignition[672]: no config URL provided
Dec 13 01:57:30.456485 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:57:30.456491 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:57:30.456495 ignition[672]: failed to fetch config: resource requires networking
Dec 13 01:57:30.456761 ignition[672]: Ignition finished successfully
Dec 13 01:57:30.472342 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:57:30.487595 ignition[781]: Ignition 2.19.0
Dec 13 01:57:30.487605 ignition[781]: Stage: fetch
Dec 13 01:57:30.487779 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:30.487789 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:57:30.487873 ignition[781]: parsed url from cmdline: ""
Dec 13 01:57:30.487877 ignition[781]: no config URL provided
Dec 13 01:57:30.487888 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:57:30.487895 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:57:30.490177 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:57:30.487913 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 13 01:57:30.488585 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 01:57:30.582144 systemd-networkd[775]: eth0: DHCPv4 address 168.119.247.250/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 01:57:30.688860 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 13 01:57:30.697461 ignition[781]: GET result: OK
Dec 13 01:57:30.697566 ignition[781]: parsing config with SHA512: 169817db594f1c3a14345f3688c8261252c379bd74dcd9e55592947e6ce9095b58ab0d94002d98457a4c8608bbf3d24f3e1208b9a04b41099568ca04ac75fe40
Dec 13 01:57:30.702639 unknown[781]: fetched base config from "system"
Dec 13 01:57:30.702652 unknown[781]: fetched base config from "system"
Dec 13 01:57:30.702658 unknown[781]: fetched user config from "hetzner"
Dec 13 01:57:30.703533 ignition[781]: fetch: fetch complete
Dec 13 01:57:30.703540 ignition[781]: fetch: fetch passed
Dec 13 01:57:30.703596 ignition[781]: Ignition finished successfully
Dec 13 01:57:30.707083 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:57:30.713218 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:57:30.729990 ignition[788]: Ignition 2.19.0
Dec 13 01:57:30.730005 ignition[788]: Stage: kargs
Dec 13 01:57:30.730839 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:30.730850 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:57:30.731713 ignition[788]: kargs: kargs passed
Dec 13 01:57:30.731762 ignition[788]: Ignition finished successfully
Dec 13 01:57:30.734727 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:57:30.741225 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:57:30.757007 ignition[794]: Ignition 2.19.0
Dec 13 01:57:30.757033 ignition[794]: Stage: disks
Dec 13 01:57:30.757240 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:30.757250 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:57:30.758302 ignition[794]: disks: disks passed
Dec 13 01:57:30.760794 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:57:30.758383 ignition[794]: Ignition finished successfully
Dec 13 01:57:30.762250 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:57:30.763224 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:57:30.763973 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:57:30.765301 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:57:30.767031 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:57:30.776270 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:57:30.795053 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:57:30.798889 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:57:30.804181 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:57:30.860054 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:57:30.860704 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:57:30.861737 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:57:30.871157 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:57:30.874197 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:57:30.877370 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:57:30.879778 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:57:30.880180 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:57:30.888047 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (810)
Dec 13 01:57:30.891076 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:57:30.891146 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:57:30.891163 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:57:30.894633 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:57:30.899985 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:57:30.900081 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:57:30.904312 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:57:30.905939 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:57:30.966244 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:57:30.967331 coreos-metadata[812]: Dec 13 01:57:30.967 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 01:57:30.969125 coreos-metadata[812]: Dec 13 01:57:30.968 INFO Fetch successful
Dec 13 01:57:30.970278 coreos-metadata[812]: Dec 13 01:57:30.969 INFO wrote hostname ci-4081-2-1-4-277531bf34 to /sysroot/etc/hostname
Dec 13 01:57:30.972532 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:57:30.975066 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:57:30.981445 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:57:30.986351 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:57:31.099882 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:57:31.105192 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:57:31.108242 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:57:31.117042 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:57:31.142625 ignition[927]: INFO : Ignition 2.19.0
Dec 13 01:57:31.142625 ignition[927]: INFO : Stage: mount
Dec 13 01:57:31.144713 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:31.144713 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:57:31.144713 ignition[927]: INFO : mount: mount passed
Dec 13 01:57:31.144713 ignition[927]: INFO : Ignition finished successfully
Dec 13 01:57:31.148129 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:57:31.149432 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:57:31.153185 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:57:31.237462 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:57:31.249353 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:57:31.258248 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (938)
Dec 13 01:57:31.258317 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:57:31.261070 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:57:31.261122 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:57:31.265340 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:57:31.265422 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:57:31.268816 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:57:31.296050 ignition[955]: INFO : Ignition 2.19.0
Dec 13 01:57:31.296050 ignition[955]: INFO : Stage: files
Dec 13 01:57:31.296050 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:31.296050 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:57:31.298921 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:57:31.300472 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:57:31.300472 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:57:31.303785 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:57:31.305079 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:57:31.306726 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:57:31.305348 unknown[955]: wrote ssh authorized keys file for user: core
Dec 13 01:57:31.308824 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:57:31.308824 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:57:31.409221 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:57:31.601169 systemd-networkd[775]: eth1: Gained IPv6LL
Dec 13 01:57:31.672780 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:57:31.672780 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:57:31.675248 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Dec 13 01:57:32.242285 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:57:32.370406 systemd-networkd[775]: eth0: Gained IPv6LL
Dec 13 01:57:32.479772 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:57:32.479772 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:32.483508 ignition[955]: INFO : files: files passed
Dec 13 01:57:32.483508 ignition[955]: INFO : Ignition finished successfully
Dec 13 01:57:32.485751 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:57:32.493623 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:57:32.497414 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:57:32.500252 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:57:32.500355 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:57:32.515062 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:32.515062 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:32.517373 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:32.519965 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:57:32.520996 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:57:32.527204 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:57:32.566524 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:57:32.566677 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:57:32.569512 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:57:32.571350 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:57:32.572489 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:57:32.574163 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:57:32.592392 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:57:32.598256 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:57:32.623853 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:57:32.625287 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:57:32.626632 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:57:32.627663 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:57:32.627807 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:57:32.629083 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:57:32.629659 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:57:32.630810 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:57:32.631917 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:57:32.632955 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:57:32.634181 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:57:32.635276 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:57:32.636441 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:57:32.637449 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:57:32.638524 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:57:32.639425 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:57:32.639552 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:57:32.640834 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:57:32.641440 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:57:32.642524 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:57:32.642597 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:57:32.643599 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:57:32.643717 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:57:32.645199 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:57:32.645308 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:57:32.646439 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:57:32.646528 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:57:32.647500 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:57:32.647595 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:57:32.657267 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:57:32.660232 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:57:32.660680 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:57:32.660794 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:57:32.661453 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:57:32.661543 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:57:32.676390 ignition[1007]: INFO : Ignition 2.19.0
Dec 13 01:57:32.676390 ignition[1007]: INFO : Stage: umount
Dec 13 01:57:32.676390 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:32.676390 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:57:32.681224 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:57:32.683198 ignition[1007]: INFO : umount: umount passed
Dec 13 01:57:32.683198 ignition[1007]: INFO : Ignition finished successfully
Dec 13 01:57:32.681326 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:57:32.687945 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:57:32.688526 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:57:32.689806 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:57:32.690670 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:57:32.691337 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:57:32.692391 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:57:32.692486 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:57:32.694458 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:57:32.694510 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:57:32.694963 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:57:32.694997 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:57:32.695564 systemd[1]: Stopped target network.target - Network.
Dec 13 01:57:32.696295 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:57:32.696342 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:57:32.697152 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:57:32.697822 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:57:32.701098 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:57:32.702121 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:57:32.702961 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:57:32.704114 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:57:32.704171 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:57:32.704917 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:57:32.704959 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:57:32.705780 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:57:32.705836 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:57:32.706630 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:57:32.706680 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:57:32.707587 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:57:32.707634 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:57:32.708722 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:57:32.709901 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:57:32.713098 systemd-networkd[775]: eth0: DHCPv6 lease lost
Dec 13 01:57:32.717081 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:57:32.717217 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:57:32.718167 systemd-networkd[775]: eth1: DHCPv6 lease lost
Dec 13 01:57:32.721200 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:57:32.723066 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:57:32.724957 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:57:32.725213 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:57:32.736511 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:57:32.736967 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:57:32.737049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:57:32.737654 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:57:32.737696 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:57:32.738225 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:57:32.738263 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:57:32.739603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:57:32.739639 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:57:32.742298 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:57:32.761166 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:57:32.761289 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:57:32.763600 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:57:32.763742 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:57:32.764886 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:57:32.764923 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:57:32.765569 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:57:32.765596 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:57:32.766526 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:57:32.766571 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:57:32.767802 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:57:32.767844 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:57:32.769077 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:57:32.769119 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:57:32.778579 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:57:32.779156 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:57:32.779219 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:57:32.779817 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:57:32.779857 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:57:32.780491 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:57:32.780528 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:57:32.781599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:57:32.781636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:57:32.791370 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:57:32.791529 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:57:32.793483 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:57:32.800307 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:57:32.811895 systemd[1]: Switching root.
Dec 13 01:57:32.850825 systemd-journald[236]: Journal stopped
Dec 13 01:57:33.755009 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:57:33.755086 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:57:33.755103 kernel: SELinux: policy capability open_perms=1
Dec 13 01:57:33.755113 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:57:33.755127 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:57:33.755141 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:57:33.755154 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:57:33.755168 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:57:33.755177 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:57:33.755187 kernel: audit: type=1403 audit(1734055053.002:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:57:33.755198 systemd[1]: Successfully loaded SELinux policy in 36.453ms.
Dec 13 01:57:33.755219 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.204ms.
Dec 13 01:57:33.755231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:57:33.755241 systemd[1]: Detected virtualization kvm.
Dec 13 01:57:33.755253 systemd[1]: Detected architecture arm64.
Dec 13 01:57:33.755264 systemd[1]: Detected first boot.
Dec 13 01:57:33.755274 systemd[1]: Hostname set to <ci-4081-2-1-4-277531bf34>.
Dec 13 01:57:33.755284 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:57:33.755294 zram_generator::config[1049]: No configuration found.
Dec 13 01:57:33.755306 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:57:33.755316 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:57:33.755335 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:57:33.755347 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:57:33.755358 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:57:33.755369 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:57:33.755379 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:57:33.755389 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:57:33.755400 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:57:33.755410 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:57:33.755420 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:57:33.755430 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:57:33.755443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:57:33.755453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:57:33.755464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:57:33.755475 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:57:33.755489 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:57:33.755500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:57:33.755510 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 01:57:33.755520 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:57:33.755531 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:57:33.755543 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:57:33.755554 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:57:33.755564 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:57:33.755575 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:57:33.755585 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:57:33.755596 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:57:33.755607 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:57:33.755619 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:57:33.755629 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:57:33.755639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:57:33.755650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:57:33.755661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:57:33.755671 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:57:33.755682 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:57:33.755692 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:57:33.755703 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:57:33.755713 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:57:33.755724 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:57:33.755734 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:57:33.755746 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:57:33.755756 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:57:33.755767 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:57:33.755777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:57:33.755789 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:57:33.755804 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:57:33.755815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:57:33.755826 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:57:33.755837 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:57:33.755847 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:57:33.755859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:57:33.755870 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:57:33.755880 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:57:33.755891 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:57:33.755901 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:57:33.755911 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:57:33.755922 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:57:33.755931 kernel: fuse: init (API version 7.39)
Dec 13 01:57:33.755942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:57:33.755953 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:57:33.755964 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:57:33.755974 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:57:33.755985 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:57:33.755996 systemd[1]: Stopped verity-setup.service.
Dec 13 01:57:33.756006 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:57:33.756046 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:57:33.756058 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:57:33.756069 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:57:33.756081 kernel: loop: module loaded
Dec 13 01:57:33.756091 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:57:33.756101 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:57:33.756112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:57:33.756122 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:57:33.756134 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:57:33.756145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:57:33.756164 kernel: ACPI: bus type drm_connector registered
Dec 13 01:57:33.756174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:57:33.756185 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:57:33.756196 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:57:33.756208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:57:33.756219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:57:33.756229 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:57:33.756240 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:57:33.756251 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:57:33.756261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:57:33.756276 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:57:33.756306 systemd-journald[1123]: Collecting audit messages is disabled.
Dec 13 01:57:33.756332 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:57:33.756346 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:57:33.756358 systemd-journald[1123]: Journal started
Dec 13 01:57:33.756380 systemd-journald[1123]: Runtime Journal (/run/log/journal/6270573a2c994c7dbe011540c0893275) is 8.0M, max 76.5M, 68.5M free.
Dec 13 01:57:33.482757 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:57:33.506927 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:57:33.507553 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:57:33.758171 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:57:33.758973 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:57:33.772645 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:57:33.781191 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:57:33.785746 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:57:33.786529 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:57:33.786626 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:57:33.788303 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:57:33.802079 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:57:33.810362 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:57:33.812818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:57:33.820228 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:57:33.828281 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:57:33.828851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:57:33.831306 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:57:33.832085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:57:33.834371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:57:33.839289 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:57:33.843692 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:57:33.845597 systemd-journald[1123]: Time spent on flushing to /var/log/journal/6270573a2c994c7dbe011540c0893275 is 70.268ms for 1122 entries.
Dec 13 01:57:33.845597 systemd-journald[1123]: System Journal (/var/log/journal/6270573a2c994c7dbe011540c0893275) is 8.0M, max 584.8M, 576.8M free.
Dec 13 01:57:33.949439 systemd-journald[1123]: Received client request to flush runtime journal.
Dec 13 01:57:33.949498 kernel: loop0: detected capacity change from 0 to 114328
Dec 13 01:57:33.949512 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:57:33.846932 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:57:33.850413 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:57:33.851298 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:57:33.854260 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:57:33.867467 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:57:33.910228 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:57:33.914627 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:57:33.921336 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:57:33.924571 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:57:33.955735 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:57:33.961043 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Dec 13 01:57:33.961057 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Dec 13 01:57:33.961652 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:57:33.967240 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:57:33.973259 kernel: loop1: detected capacity change from 0 to 8
Dec 13 01:57:33.980874 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:57:33.985327 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:57:33.985987 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:57:34.004140 kernel: loop2: detected capacity change from 0 to 189592
Dec 13 01:57:34.046120 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:57:34.052042 kernel: loop3: detected capacity change from 0 to 114432
Dec 13 01:57:34.053210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:57:34.085602 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Dec 13 01:57:34.085629 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Dec 13 01:57:34.097967 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:57:34.100446 kernel: loop4: detected capacity change from 0 to 114328
Dec 13 01:57:34.118050 kernel: loop5: detected capacity change from 0 to 8
Dec 13 01:57:34.123044 kernel: loop6: detected capacity change from 0 to 189592
Dec 13 01:57:34.148070 kernel: loop7: detected capacity change from 0 to 114432
Dec 13 01:57:34.166461 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 13 01:57:34.168894 (sd-merge)[1191]: Merged extensions into '/usr'.
Dec 13 01:57:34.176138 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:57:34.176263 systemd[1]: Reloading...
Dec 13 01:57:34.283207 zram_generator::config[1225]: No configuration found.
Dec 13 01:57:34.405890 ldconfig[1158]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:57:34.431318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:34.476617 systemd[1]: Reloading finished in 299 ms.
Dec 13 01:57:34.507173 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:57:34.508632 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:57:34.521434 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:57:34.524736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:57:34.538125 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:57:34.538145 systemd[1]: Reloading...
Dec 13 01:57:34.551696 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:57:34.551955 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:57:34.552597 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:57:34.552805 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Dec 13 01:57:34.552848 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Dec 13 01:57:34.556192 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:57:34.556204 systemd-tmpfiles[1256]: Skipping /boot
Dec 13 01:57:34.564810 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:57:34.564828 systemd-tmpfiles[1256]: Skipping /boot
Dec 13 01:57:34.627051 zram_generator::config[1285]: No configuration found.
Dec 13 01:57:34.722343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:34.768077 systemd[1]: Reloading finished in 229 ms.
Dec 13 01:57:34.787625 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:57:34.794589 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:57:34.807307 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:57:34.812202 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:57:34.824309 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:57:34.830339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:57:34.841542 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:57:34.848818 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:57:34.854863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:57:34.857308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:57:34.863371 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:57:34.869347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:57:34.870082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:57:34.873745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:57:34.873891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:57:34.875777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:57:34.879310 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:57:34.880378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:57:34.883078 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:57:34.888291 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:57:34.892187 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:57:34.901985 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:57:34.902961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:57:34.904250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:57:34.906744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:57:34.917353 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:57:34.929491 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:57:34.937129 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:34.937301 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:57:34.942138 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:57:34.943141 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:57:34.944388 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:57:34.946561 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:34.946700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:57:34.947537 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:57:34.953806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:57:34.953869 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:34.965090 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Dec 13 01:57:34.989650 augenrules[1358]: No rules Dec 13 01:57:34.991949 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:57:34.998292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:57:35.012344 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:57:35.012969 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:57:35.119226 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:57:35.119919 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:57:35.137108 systemd-networkd[1368]: lo: Link UP Dec 13 01:57:35.137646 systemd-networkd[1368]: lo: Gained carrier Dec 13 01:57:35.138792 systemd-networkd[1368]: Enumeration completed Dec 13 01:57:35.140235 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:57:35.140460 systemd-timesyncd[1342]: No network connectivity, watching for changes. Dec 13 01:57:35.153023 systemd-resolved[1331]: Positive Trust Anchors: Dec 13 01:57:35.153088 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:57:35.153122 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:57:35.157431 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:57:35.162419 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1378) Dec 13 01:57:35.162684 systemd-resolved[1331]: Using system hostname 'ci-4081-2-1-4-277531bf34'. 
Dec 13 01:57:35.167168 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:57:35.167796 systemd[1]: Reached target network.target - Network. Dec 13 01:57:35.168419 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:57:35.170986 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 01:57:35.178110 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1378) Dec 13 01:57:35.226618 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:57:35.227159 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:57:35.231280 systemd-networkd[1368]: eth0: Link UP Dec 13 01:57:35.231933 systemd-networkd[1368]: eth0: Gained carrier Dec 13 01:57:35.232047 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:57:35.243041 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:57:35.258277 systemd-networkd[1368]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:57:35.258422 systemd-networkd[1368]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:57:35.259072 systemd-networkd[1368]: eth1: Link UP Dec 13 01:57:35.259171 systemd-networkd[1368]: eth1: Gained carrier Dec 13 01:57:35.259231 systemd-networkd[1368]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:57:35.283049 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1373) Dec 13 01:57:35.293150 systemd-networkd[1368]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:57:35.293962 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Dec 13 01:57:35.334166 systemd-networkd[1368]: eth0: DHCPv4 address 168.119.247.250/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 01:57:35.334701 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Dec 13 01:57:35.335334 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Dec 13 01:57:35.344006 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 13 01:57:35.344322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:57:35.351219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:57:35.358311 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:57:35.366208 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:57:35.368208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:57:35.368257 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
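Both NICs above are matched by /usr/lib/systemd/network/zz-default.network, the catch-all fallback unit responsible for the 'potentially unpredictable interface name' warning, and both then pull DHCPv4 leases. A sketch of what such a catch-all unit typically contains, assuming Flatcar's shipped default (the exact options on this image may differ):

    [Match]
    Name=*

    [Network]
    DHCP=yes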
Dec 13 01:57:35.368756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:35.369324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:57:35.386322 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 01:57:35.389248 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Dec 13 01:57:35.389335 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 01:57:35.389355 kernel: [drm] features: -context_init Dec 13 01:57:35.390191 kernel: [drm] number of scanouts: 1 Dec 13 01:57:35.390249 kernel: [drm] number of cap sets: 0 Dec 13 01:57:35.398497 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:57:35.403209 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 01:57:35.404693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:35.405304 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:57:35.407736 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:35.407895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:57:35.408445 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 01:57:35.412488 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 01:57:35.425709 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:57:35.425765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:57:35.430289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:57:35.438449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:57:35.497837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:57:35.560087 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:57:35.567343 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:57:35.592089 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:57:35.613490 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:57:35.615923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:57:35.617279 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:57:35.618770 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:57:35.620304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:57:35.621879 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:57:35.622730 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:57:35.623508 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:57:35.624209 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Dec 13 01:57:35.624246 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:57:35.624761 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:57:35.627418 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:57:35.629885 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:57:35.635999 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:57:35.637925 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:57:35.639157 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:57:35.639748 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:57:35.640254 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:57:35.640742 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:57:35.640774 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:57:35.648314 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:57:35.654318 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:57:35.658329 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:57:35.663192 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:57:35.664211 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:57:35.673343 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:57:35.673852 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:57:35.676785 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:57:35.693702 jq[1440]: false Dec 13 01:57:35.692064 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:57:35.693775 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 01:57:35.697259 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:57:35.701844 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:57:35.708256 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:57:35.709522 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:57:35.712036 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:57:35.713817 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:57:35.717181 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:57:35.723474 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:57:35.724374 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:57:35.724518 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 13 01:57:35.742770 dbus-daemon[1439]: [system] SELinux support is enabled Dec 13 01:57:35.761503 extend-filesystems[1443]: Found loop4 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found loop5 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found loop6 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found loop7 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found sda Dec 13 01:57:35.761503 extend-filesystems[1443]: Found sda1 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found sda2 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found sda3 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found usr Dec 13 01:57:35.761503 extend-filesystems[1443]: Found sda4 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found sda6 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found sda7 Dec 13 01:57:35.761503 extend-filesystems[1443]: Found sda9 Dec 13 01:57:35.761503 extend-filesystems[1443]: Checking size of /dev/sda9 Dec 13 01:57:35.761405 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:57:35.794405 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 01:57:35.794447 coreos-metadata[1438]: Dec 13 01:57:35.754 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 01:57:35.794447 coreos-metadata[1438]: Dec 13 01:57:35.758 INFO Fetch successful Dec 13 01:57:35.794447 coreos-metadata[1438]: Dec 13 01:57:35.758 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 01:57:35.794447 coreos-metadata[1438]: Dec 13 01:57:35.758 INFO Fetch successful Dec 13 01:57:35.798661 extend-filesystems[1443]: Resized partition /dev/sda9 Dec 13 01:57:35.777067 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:57:35.802787 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:57:35.777111 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:57:35.777784 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:57:35.777808 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:57:35.798831 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:57:35.820535 jq[1452]: true Dec 13 01:57:35.821975 tar[1454]: linux-arm64/helm Dec 13 01:57:35.799005 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:57:35.803909 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:57:35.805099 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:57:35.817113 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:57:35.836043 jq[1477]: true Dec 13 01:57:35.865435 update_engine[1451]: I20241213 01:57:35.864248 1451 main.cc:92] Flatcar Update Engine starting Dec 13 01:57:35.890740 update_engine[1451]: I20241213 01:57:35.890542 1451 update_check_scheduler.cc:74] Next update check in 3m29s Dec 13 01:57:35.894100 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:57:35.913751 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
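extend-filesystems above checks /dev/sda9, resizes the partition, and invokes resize2fs 1.47.1; the grow completes in the entries just below ("on-line resizing required ... now 9393147 (4k) blocks long"). A minimal sketch of the equivalent manual sequence on a mounted ext4 root, assuming /dev/sda with partition 9 as in this log (growpart comes from cloud-utils and is an assumption here, not necessarily the tool extend-filesystems wraps):

    # Grow partition 9 to fill the disk, then resize the mounted
    # ext4 filesystem on-line; resize2fs reads the new size itself.
    growpart /dev/sda 9
    resize2fs /dev/sda9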
Dec 13 01:57:35.915611 systemd-logind[1450]: New seat seat0. Dec 13 01:57:35.919141 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:57:35.919161 systemd-logind[1450]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 13 01:57:35.920120 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:57:35.929068 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1375) Dec 13 01:57:35.958412 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 01:57:35.972068 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:57:35.972953 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:57:35.999544 extend-filesystems[1475]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:57:35.999544 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 01:57:35.999544 extend-filesystems[1475]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 01:57:36.005218 extend-filesystems[1443]: Resized filesystem in /dev/sda9 Dec 13 01:57:36.005218 extend-filesystems[1443]: Found sr0 Dec 13 01:57:36.000114 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:57:36.000308 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:57:36.038427 bash[1508]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:57:36.045715 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:57:36.055575 systemd[1]: Starting sshkeys.service... Dec 13 01:57:36.070227 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:57:36.090390 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:57:36.128855 coreos-metadata[1519]: Dec 13 01:57:36.128 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 01:57:36.130537 coreos-metadata[1519]: Dec 13 01:57:36.130 INFO Fetch successful Dec 13 01:57:36.133121 unknown[1519]: wrote ssh authorized keys file for user: core Dec 13 01:57:36.169111 update-ssh-keys[1524]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:57:36.170637 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:57:36.175959 systemd[1]: Finished sshkeys.service. Dec 13 01:57:36.273432 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:57:36.316056 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:57:36.361110 containerd[1476]: time="2024-12-13T01:57:36.359754480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:57:36.367064 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:57:36.378829 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:57:36.398613 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:57:36.401120 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:57:36.411427 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:57:36.413691 containerd[1476]: time="2024-12-13T01:57:36.413644880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:57:36.416657 containerd[1476]: time="2024-12-13T01:57:36.416614760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:36.416756 containerd[1476]: time="2024-12-13T01:57:36.416742080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:57:36.416850 containerd[1476]: time="2024-12-13T01:57:36.416835640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:57:36.417112 containerd[1476]: time="2024-12-13T01:57:36.417093080Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:57:36.417208 containerd[1476]: time="2024-12-13T01:57:36.417193640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:36.417350 containerd[1476]: time="2024-12-13T01:57:36.417321360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:36.417421 containerd[1476]: time="2024-12-13T01:57:36.417398960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:36.417699 containerd[1476]: time="2024-12-13T01:57:36.417679720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:36.417776 containerd[1476]: time="2024-12-13T01:57:36.417762080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:36.417839 containerd[1476]: time="2024-12-13T01:57:36.417816680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:36.417883 containerd[1476]: time="2024-12-13T01:57:36.417872200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:36.418096 containerd[1476]: time="2024-12-13T01:57:36.418070840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:36.418413 containerd[1476]: time="2024-12-13T01:57:36.418386560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:36.418620 containerd[1476]: time="2024-12-13T01:57:36.418600720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:36.418687 containerd[1476]: time="2024-12-13T01:57:36.418674360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 01:57:36.418847 containerd[1476]: time="2024-12-13T01:57:36.418814160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:57:36.418976 containerd[1476]: time="2024-12-13T01:57:36.418959480Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:57:36.424815 containerd[1476]: time="2024-12-13T01:57:36.424775520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:57:36.424987 containerd[1476]: time="2024-12-13T01:57:36.424973720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:57:36.425129 containerd[1476]: time="2024-12-13T01:57:36.425115280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:57:36.425201 containerd[1476]: time="2024-12-13T01:57:36.425188600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:57:36.425266 containerd[1476]: time="2024-12-13T01:57:36.425254000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:57:36.425473 containerd[1476]: time="2024-12-13T01:57:36.425454320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:57:36.425807 containerd[1476]: time="2024-12-13T01:57:36.425786840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:57:36.425986 containerd[1476]: time="2024-12-13T01:57:36.425968440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:57:36.426115 containerd[1476]: time="2024-12-13T01:57:36.426096640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:57:36.426173 containerd[1476]: time="2024-12-13T01:57:36.426161280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:57:36.426226 containerd[1476]: time="2024-12-13T01:57:36.426214560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:57:36.426288 containerd[1476]: time="2024-12-13T01:57:36.426276080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:57:36.426348 containerd[1476]: time="2024-12-13T01:57:36.426335680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:57:36.426408 containerd[1476]: time="2024-12-13T01:57:36.426396320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:57:36.426488 containerd[1476]: time="2024-12-13T01:57:36.426475040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:57:36.426545 containerd[1476]: time="2024-12-13T01:57:36.426533760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:57:36.426600 containerd[1476]: time="2024-12-13T01:57:36.426586560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 01:57:36.426651 containerd[1476]: time="2024-12-13T01:57:36.426636560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:57:36.426726 containerd[1476]: time="2024-12-13T01:57:36.426712560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426770040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426787960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426808480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426821880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426838120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426852520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426866880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426892160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426910760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426924640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426937120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426952240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.426967680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.427004440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427266 containerd[1476]: time="2024-12-13T01:57:36.427042280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.427546 containerd[1476]: time="2024-12-13T01:57:36.427056560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:57:36.427750 containerd[1476]: time="2024-12-13T01:57:36.427732880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 01:57:36.427819 containerd[1476]: time="2024-12-13T01:57:36.427800360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:57:36.427878 containerd[1476]: time="2024-12-13T01:57:36.427865920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:57:36.427934 containerd[1476]: time="2024-12-13T01:57:36.427921600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:57:36.427985 containerd[1476]: time="2024-12-13T01:57:36.427973760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.428055 containerd[1476]: time="2024-12-13T01:57:36.428040680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:57:36.428906 containerd[1476]: time="2024-12-13T01:57:36.428086200Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:57:36.428906 containerd[1476]: time="2024-12-13T01:57:36.428104720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:57:36.428952 containerd[1476]: time="2024-12-13T01:57:36.428469760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:57:36.428952 containerd[1476]: time="2024-12-13T01:57:36.428535960Z" level=info msg="Connect containerd service" Dec 13 01:57:36.428952 containerd[1476]: time="2024-12-13T01:57:36.428573920Z" level=info msg="using legacy CRI server" Dec 13 01:57:36.428952 containerd[1476]: time="2024-12-13T01:57:36.428580600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:57:36.428952 containerd[1476]: time="2024-12-13T01:57:36.428669840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:57:36.435147 containerd[1476]: time="2024-12-13T01:57:36.435106680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:57:36.435915 containerd[1476]: time="2024-12-13T01:57:36.435887400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:57:36.436075 containerd[1476]: time="2024-12-13T01:57:36.436057240Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:57:36.436288 containerd[1476]: time="2024-12-13T01:57:36.436259040Z" level=info msg="Start subscribing containerd event" Dec 13 01:57:36.437349 containerd[1476]: time="2024-12-13T01:57:36.436418200Z" level=info msg="Start recovering state" Dec 13 01:57:36.437349 containerd[1476]: time="2024-12-13T01:57:36.436496400Z" level=info msg="Start event monitor" Dec 13 01:57:36.437349 containerd[1476]: time="2024-12-13T01:57:36.436508840Z" level=info msg="Start snapshots syncer" Dec 13 01:57:36.437349 containerd[1476]: time="2024-12-13T01:57:36.436518080Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:57:36.437349 containerd[1476]: time="2024-12-13T01:57:36.436525640Z" level=info msg="Start streaming server" Dec 13 01:57:36.436783 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:57:36.439257 containerd[1476]: time="2024-12-13T01:57:36.438411000Z" level=info msg="containerd successfully booted in 0.081575s" Dec 13 01:57:36.442152 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:57:36.453608 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:57:36.462471 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:57:36.463229 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:57:36.519076 tar[1454]: linux-arm64/LICENSE Dec 13 01:57:36.519076 tar[1454]: linux-arm64/README.md Dec 13 01:57:36.530091 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:57:36.721343 systemd-networkd[1368]: eth0: Gained IPv6LL Dec 13 01:57:36.722012 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Dec 13 01:57:36.723604 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
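The long "Start cri plugin with config {...}" dump a few entries above is containerd printing its effective CRI configuration. A sketch of the /etc/containerd/config.toml fragment that would produce the values visible in that dump (runc via the runc.v2 shim with SystemdCgroup=true, the overlayfs snapshotter, and the pause:3.8 sandbox image); this is a reconstruction from the dump, not the file actually shipped on the node:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true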
Dec 13 01:57:36.725712 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:57:36.739407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:57:36.742446 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:57:36.778930 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:57:36.785206 systemd-networkd[1368]: eth1: Gained IPv6LL Dec 13 01:57:36.786539 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Dec 13 01:57:37.384962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:57:37.386442 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:57:37.389235 systemd[1]: Startup finished in 783ms (kernel) + 5.286s (initrd) + 4.421s (userspace) = 10.492s. Dec 13 01:57:37.396731 (kubelet)[1569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:57:37.933713 kubelet[1569]: E1213 01:57:37.933662 1569 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:57:37.936790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:57:37.936966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:57:48.187827 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:57:48.194396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:57:48.307267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:57:48.318417 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:57:48.368540 kubelet[1588]: E1213 01:57:48.368406 1588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:57:48.374424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:57:48.374659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:57:58.557655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:57:58.564336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:57:58.680375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:57:58.693570 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:57:58.740713 kubelet[1603]: E1213 01:57:58.740626 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:57:58.744365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:57:58.744721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:58:06.908883 systemd-timesyncd[1342]: Contacted time server 213.209.109.45:123 (2.flatcar.pool.ntp.org). Dec 13 01:58:06.908994 systemd-timesyncd[1342]: Initial clock synchronization to Fri 2024-12-13 01:58:07.082898 UTC. Dec 13 01:58:08.808269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:58:08.815430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:58:08.924380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:58:08.929293 (kubelet)[1619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:58:08.975202 kubelet[1619]: E1213 01:58:08.975123 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:58:08.977790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:58:08.977940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:58:19.057882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:58:19.075337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:58:19.173140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:58:19.177779 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:58:19.223878 kubelet[1635]: E1213 01:58:19.223805 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:58:19.226539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:58:19.226699 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:58:20.942393 update_engine[1451]: I20241213 01:58:20.942234 1451 update_attempter.cc:509] Updating boot flags... Dec 13 01:58:20.988042 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1651) Dec 13 01:58:21.066053 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1653) Dec 13 01:58:29.307587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
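Every kubelet attempt in this stretch exits with status 1 for the same reason: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts until something (normally kubeadm during init/join) writes it. A minimal sketch of a KubeletConfiguration that would satisfy the load path; cgroupDriver matches the SystemdCgroup=true runc option seen in the containerd dump, everything else is left at defaults, and the file is illustrative only, not the one this node was waiting for:

    # /var/lib/kubelet/config.yaml  (illustrative)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd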
Dec 13 01:58:29.315304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:58:29.435432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:58:29.450734 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:58:29.493666 kubelet[1668]: E1213 01:58:29.493544 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:58:29.495982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:58:29.496163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:58:39.557902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:58:39.567443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:58:39.696976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:58:39.711821 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:58:39.748098 kubelet[1683]: E1213 01:58:39.747990 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:58:39.751121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:58:39.751444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:58:49.807597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 01:58:49.815524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:58:49.932739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:58:49.939832 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:58:49.983141 kubelet[1696]: E1213 01:58:49.983042 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:58:49.986535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:58:49.986802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:59:00.057359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 01:59:00.067332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:59:00.191232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:59:00.196214 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:59:00.234910 kubelet[1712]: E1213 01:59:00.234847 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:59:00.237452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:59:00.237629 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:59:10.307478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 01:59:10.314414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:59:10.444379 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:59:10.445054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:59:10.489472 kubelet[1727]: E1213 01:59:10.489370 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:59:10.493383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:59:10.493622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:59:20.558151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 01:59:20.573530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:59:20.697107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:59:20.703837 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:59:20.741549 kubelet[1740]: E1213 01:59:20.741447 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:59:20.744164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:59:20.744336 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:59:25.101681 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:59:25.110316 systemd[1]: Started sshd@0-168.119.247.250:22-147.75.109.163:48402.service - OpenSSH per-connection server daemon (147.75.109.163:48402). Dec 13 01:59:26.141219 sshd[1749]: Accepted publickey for core from 147.75.109.163 port 48402 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 01:59:26.144601 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:59:26.163092 systemd-logind[1450]: New session 1 of user core. Dec 13 01:59:26.164142 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Dec 13 01:59:26.178457 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:59:26.200163 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:59:26.209389 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:59:26.213336 (systemd)[1753]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:26.326365 systemd[1753]: Queued start job for default target default.target. Dec 13 01:59:26.337676 systemd[1753]: Created slice app.slice - User Application Slice. Dec 13 01:59:26.337727 systemd[1753]: Reached target paths.target - Paths. Dec 13 01:59:26.337750 systemd[1753]: Reached target timers.target - Timers. Dec 13 01:59:26.339744 systemd[1753]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:59:26.355694 systemd[1753]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:59:26.355842 systemd[1753]: Reached target sockets.target - Sockets. Dec 13 01:59:26.355861 systemd[1753]: Reached target basic.target - Basic System. Dec 13 01:59:26.355926 systemd[1753]: Reached target default.target - Main User Target. Dec 13 01:59:26.355964 systemd[1753]: Startup finished in 136ms. Dec 13 01:59:26.356237 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:59:26.363325 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:59:27.067848 systemd[1]: Started sshd@1-168.119.247.250:22-147.75.109.163:49666.service - OpenSSH per-connection server daemon (147.75.109.163:49666). Dec 13 01:59:28.059867 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 49666 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 01:59:28.062228 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:59:28.070957 systemd-logind[1450]: New session 2 of user core. Dec 13 01:59:28.077300 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:59:28.753722 sshd[1764]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:28.759591 systemd[1]: sshd@1-168.119.247.250:22-147.75.109.163:49666.service: Deactivated successfully. Dec 13 01:59:28.761558 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:59:28.763647 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:59:28.764971 systemd-logind[1450]: Removed session 2. Dec 13 01:59:28.932425 systemd[1]: Started sshd@2-168.119.247.250:22-147.75.109.163:49672.service - OpenSSH per-connection server daemon (147.75.109.163:49672). Dec 13 01:59:29.911974 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 49672 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 01:59:29.914092 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:59:29.920554 systemd-logind[1450]: New session 3 of user core. Dec 13 01:59:29.926212 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:59:30.590136 sshd[1771]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:30.595863 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:59:30.596187 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:59:30.597450 systemd[1]: sshd@2-168.119.247.250:22-147.75.109.163:49672.service: Deactivated successfully. Dec 13 01:59:30.600602 systemd-logind[1450]: Removed session 3. 
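Each "Accepted publickey" line above records the SHA256 fingerprint of the key that update-ssh-keys installed into /home/core/.ssh/authorized_keys earlier in the boot. A small sketch of verifying that the installed key matches the logged fingerprint, assuming a reasonably recent OpenSSH (which prints one fingerprint per key in the file):

    # One of the printed fingerprints should match the
    # SHA256:hso9grF+... value in the sshd log lines above.
    ssh-keygen -lf /home/core/.ssh/authorized_keys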
Dec 13 01:59:30.759766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 01:59:30.770315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:59:30.772454 systemd[1]: Started sshd@3-168.119.247.250:22-147.75.109.163:49686.service - OpenSSH per-connection server daemon (147.75.109.163:49686). Dec 13 01:59:30.883961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:59:30.888463 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:59:30.924042 kubelet[1788]: E1213 01:59:30.923973 1788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:59:30.926654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:59:30.926835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:59:31.762191 sshd[1779]: Accepted publickey for core from 147.75.109.163 port 49686 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 01:59:31.764300 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:59:31.771327 systemd-logind[1450]: New session 4 of user core. Dec 13 01:59:31.784387 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:59:32.450484 sshd[1779]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:32.455285 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:59:32.456101 systemd[1]: sshd@3-168.119.247.250:22-147.75.109.163:49686.service: Deactivated successfully. Dec 13 01:59:32.457872 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:59:32.459922 systemd-logind[1450]: Removed session 4. Dec 13 01:59:32.622560 systemd[1]: Started sshd@4-168.119.247.250:22-147.75.109.163:49700.service - OpenSSH per-connection server daemon (147.75.109.163:49700). Dec 13 01:59:33.605264 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 49700 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 01:59:33.607441 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:59:33.613113 systemd-logind[1450]: New session 5 of user core. Dec 13 01:59:33.624377 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:59:34.148558 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:59:34.148860 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:59:34.167254 sudo[1803]: pam_unix(sudo:session): session closed for user root Dec 13 01:59:34.326470 sshd[1800]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:34.333179 systemd[1]: sshd@4-168.119.247.250:22-147.75.109.163:49700.service: Deactivated successfully. Dec 13 01:59:34.335278 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:59:34.336553 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:59:34.337961 systemd-logind[1450]: Removed session 5. 
Dec 13 01:59:34.504467 systemd[1]: Started sshd@5-168.119.247.250:22-147.75.109.163:49708.service - OpenSSH per-connection server daemon (147.75.109.163:49708). Dec 13 01:59:35.475413 sshd[1808]: Accepted publickey for core from 147.75.109.163 port 49708 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 01:59:35.477505 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:59:35.483683 systemd-logind[1450]: New session 6 of user core. Dec 13 01:59:35.492773 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:59:35.997597 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:59:35.998110 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:59:36.001803 sudo[1812]: pam_unix(sudo:session): session closed for user root Dec 13 01:59:36.008216 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:59:36.008576 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:59:36.030365 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:59:36.032363 auditctl[1815]: No rules Dec 13 01:59:36.032848 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:59:36.033115 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:59:36.037991 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:59:36.075377 augenrules[1833]: No rules Dec 13 01:59:36.076594 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:59:36.079528 sudo[1811]: pam_unix(sudo:session): session closed for user root Dec 13 01:59:36.238884 sshd[1808]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:36.247012 systemd[1]: sshd@5-168.119.247.250:22-147.75.109.163:49708.service: Deactivated successfully. Dec 13 01:59:36.251405 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:59:36.253099 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:59:36.254444 systemd-logind[1450]: Removed session 6. Dec 13 01:59:36.413180 systemd[1]: Started sshd@6-168.119.247.250:22-147.75.109.163:33816.service - OpenSSH per-connection server daemon (147.75.109.163:33816). Dec 13 01:59:37.414805 sshd[1841]: Accepted publickey for core from 147.75.109.163 port 33816 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 01:59:37.416885 sshd[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:59:37.423115 systemd-logind[1450]: New session 7 of user core. Dec 13 01:59:37.431235 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:59:37.942581 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:59:37.942945 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:59:38.287551 systemd[1]: Starting docker.service - Docker Application Container Engine... 
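Note: the sudo session above deletes two rule files from /etc/audit/rules.d/ and restarts audit-rules, after which both auditctl and augenrules report "No rules" because the directory is now empty. augenrules merges every *.rules file under /etc/audit/rules.d/ into the kernel ruleset, so re-enabling auditing is a matter of dropping a file back in (the example rule below is illustrative, not from this host):

  echo '-w /etc/passwd -p wa -k identity' > /etc/audit/rules.d/10-identity.rules
  augenrules --load   # rebuild and load the merged ruleset
  auditctl -l         # list the rules now active in the kernel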
Dec 13 01:59:38.288010 (dockerd)[1859]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:59:38.571863 dockerd[1859]: time="2024-12-13T01:59:38.571587636Z" level=info msg="Starting up" Dec 13 01:59:38.677109 dockerd[1859]: time="2024-12-13T01:59:38.677061947Z" level=info msg="Loading containers: start." Dec 13 01:59:38.804069 kernel: Initializing XFRM netlink socket Dec 13 01:59:38.884834 systemd-networkd[1368]: docker0: Link UP Dec 13 01:59:38.903164 dockerd[1859]: time="2024-12-13T01:59:38.902997468Z" level=info msg="Loading containers: done." Dec 13 01:59:38.927228 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2394112033-merged.mount: Deactivated successfully. Dec 13 01:59:38.936359 dockerd[1859]: time="2024-12-13T01:59:38.936308042Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:59:38.936537 dockerd[1859]: time="2024-12-13T01:59:38.936428367Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:59:38.936563 dockerd[1859]: time="2024-12-13T01:59:38.936548092Z" level=info msg="Daemon has completed initialization" Dec 13 01:59:38.977435 dockerd[1859]: time="2024-12-13T01:59:38.977259011Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:59:38.977704 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:59:40.018178 containerd[1476]: time="2024-12-13T01:59:40.018126646Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:59:40.650574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475537624.mount: Deactivated successfully. Dec 13 01:59:41.057434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Dec 13 01:59:41.063354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:59:41.219737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:59:41.223957 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:59:41.263952 kubelet[2055]: E1213 01:59:41.263869 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:59:41.266940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:59:41.267168 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
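Note: dockerd has finished initializing and serves its API on /run/docker.sock, while kubelet fails once more for the same missing config file. The daemon socket can be checked directly against the standard Docker Engine API (generic commands, not specific to this host):

  curl --unix-socket /run/docker.sock http://localhost/version
  docker info --format '{{.Driver}}'   # should print overlay2, matching the log above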
Dec 13 01:59:42.434065 containerd[1476]: time="2024-12-13T01:59:42.433911046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:42.435521 containerd[1476]: time="2024-12-13T01:59:42.435283620Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615677" Dec 13 01:59:42.436476 containerd[1476]: time="2024-12-13T01:59:42.436417786Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:42.440186 containerd[1476]: time="2024-12-13T01:59:42.440130373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:42.441818 containerd[1476]: time="2024-12-13T01:59:42.441557910Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.423381341s" Dec 13 01:59:42.441818 containerd[1476]: time="2024-12-13T01:59:42.441603032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Dec 13 01:59:42.442762 containerd[1476]: time="2024-12-13T01:59:42.442509228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 01:59:44.694029 containerd[1476]: time="2024-12-13T01:59:44.693966470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:44.696033 containerd[1476]: time="2024-12-13T01:59:44.695948509Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470116" Dec 13 01:59:44.697280 containerd[1476]: time="2024-12-13T01:59:44.697190578Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:44.700774 containerd[1476]: time="2024-12-13T01:59:44.700695714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:44.702632 containerd[1476]: time="2024-12-13T01:59:44.702478400Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 2.259932052s" Dec 13 01:59:44.702632 containerd[1476]: time="2024-12-13T01:59:44.702519119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Dec 13 01:59:44.703260 
containerd[1476]: time="2024-12-13T01:59:44.703087575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 01:59:46.302450 containerd[1476]: time="2024-12-13T01:59:46.302383671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:46.303999 containerd[1476]: time="2024-12-13T01:59:46.303958853Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024222" Dec 13 01:59:46.304838 containerd[1476]: time="2024-12-13T01:59:46.304491473Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:46.307875 containerd[1476]: time="2024-12-13T01:59:46.307799391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:46.309473 containerd[1476]: time="2024-12-13T01:59:46.309303095Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.606181041s" Dec 13 01:59:46.309473 containerd[1476]: time="2024-12-13T01:59:46.309353653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Dec 13 01:59:46.310452 containerd[1476]: time="2024-12-13T01:59:46.310165063Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:59:47.284139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894498852.mount: Deactivated successfully. 
Dec 13 01:59:47.967674 containerd[1476]: time="2024-12-13T01:59:47.967603191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:47.968707 containerd[1476]: time="2024-12-13T01:59:47.968621475Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771452" Dec 13 01:59:47.969713 containerd[1476]: time="2024-12-13T01:59:47.969621240Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:47.972008 containerd[1476]: time="2024-12-13T01:59:47.971929759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:47.972846 containerd[1476]: time="2024-12-13T01:59:47.972592656Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.662387995s" Dec 13 01:59:47.972846 containerd[1476]: time="2024-12-13T01:59:47.972632175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 01:59:47.973580 containerd[1476]: time="2024-12-13T01:59:47.973066160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:59:48.532644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555268043.mount: Deactivated successfully. 
Dec 13 01:59:49.125636 containerd[1476]: time="2024-12-13T01:59:49.125573608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:49.126950 containerd[1476]: time="2024-12-13T01:59:49.126909646Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Dec 13 01:59:49.128392 containerd[1476]: time="2024-12-13T01:59:49.128325642Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:49.131995 containerd[1476]: time="2024-12-13T01:59:49.131911490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:49.133892 containerd[1476]: time="2024-12-13T01:59:49.133702474Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.160608396s" Dec 13 01:59:49.133892 containerd[1476]: time="2024-12-13T01:59:49.133744233Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:59:49.134548 containerd[1476]: time="2024-12-13T01:59:49.134524449Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:59:49.739351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208066529.mount: Deactivated successfully. 
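Note: the pull order so far (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, now pause, with etcd to follow below) matches the control-plane image set kubeadm pre-pulls. The same set can be listed or fetched ahead of time, pinned to the version seen in this log:

  kubeadm config images list --kubernetes-version v1.31.4
  kubeadm config images pull --kubernetes-version v1.31.4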
Dec 13 01:59:49.748663 containerd[1476]: time="2024-12-13T01:59:49.748589613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:49.749398 containerd[1476]: time="2024-12-13T01:59:49.749358669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Dec 13 01:59:49.750406 containerd[1476]: time="2024-12-13T01:59:49.750295119Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:49.753402 containerd[1476]: time="2024-12-13T01:59:49.753338265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:49.754053 containerd[1476]: time="2024-12-13T01:59:49.753920686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 619.278362ms" Dec 13 01:59:49.754053 containerd[1476]: time="2024-12-13T01:59:49.754001364Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 01:59:49.754624 containerd[1476]: time="2024-12-13T01:59:49.754502348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:59:50.422596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2570846077.mount: Deactivated successfully. Dec 13 01:59:51.307577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Dec 13 01:59:51.317357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:59:51.442063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:59:51.453470 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:59:51.506647 kubelet[2187]: E1213 01:59:51.506583 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:59:51.509693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:59:51.509943 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
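Note: the kubelet crash loop continues (restart counter 13), with attempts roughly ten seconds apart; that cadence is consistent with the stock kubeadm unit's Restart=always/RestartSec=10 policy, though the exact unit and drop-ins on this host aren't shown here. The policy and counter can be read back from systemd:

  systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
  systemctl cat kubelet   # unit file plus any drop-ins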
Dec 13 01:59:53.819584 containerd[1476]: time="2024-12-13T01:59:53.817529743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:53.821072 containerd[1476]: time="2024-12-13T01:59:53.820957539Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487" Dec 13 01:59:53.823344 containerd[1476]: time="2024-12-13T01:59:53.823276163Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:53.828175 containerd[1476]: time="2024-12-13T01:59:53.828108125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:59:53.830573 containerd[1476]: time="2024-12-13T01:59:53.829996079Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.075455132s" Dec 13 01:59:53.830573 containerd[1476]: time="2024-12-13T01:59:53.830077797Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Dec 13 01:59:59.291162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:59:59.304421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:59:59.333219 systemd[1]: Reloading requested from client PID 2226 ('systemctl') (unit session-7.scope)... Dec 13 01:59:59.333233 systemd[1]: Reloading... Dec 13 01:59:59.456052 zram_generator::config[2272]: No configuration found. Dec 13 01:59:59.532429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:59:59.598419 systemd[1]: Reloading finished in 264 ms. Dec 13 01:59:59.645442 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:59:59.645533 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:59:59.645856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:59:59.652467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:59:59.756189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:59:59.767926 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:59:59.820309 kubelet[2314]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:59:59.822063 kubelet[2314]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:59:59.822063 kubelet[2314]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:59:59.822063 kubelet[2314]: I1213 01:59:59.820831 2314 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:00:00.682080 kubelet[2314]: I1213 02:00:00.681547 2314 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:00:00.682080 kubelet[2314]: I1213 02:00:00.681582 2314 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:00:00.682264 kubelet[2314]: I1213 02:00:00.682098 2314 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:00:00.721287 kubelet[2314]: E1213 02:00:00.721234 2314 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://168.119.247.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.247.250:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:00:00.721889 kubelet[2314]: I1213 02:00:00.721641 2314 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:00:00.730089 kubelet[2314]: E1213 02:00:00.729054 2314 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:00:00.730089 kubelet[2314]: I1213 02:00:00.729095 2314 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:00:00.736944 kubelet[2314]: I1213 02:00:00.736902 2314 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:00:00.739197 kubelet[2314]: I1213 02:00:00.738153 2314 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:00:00.739197 kubelet[2314]: I1213 02:00:00.738324 2314 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:00:00.739197 kubelet[2314]: I1213 02:00:00.738358 2314 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-4-277531bf34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:00:00.739197 kubelet[2314]: I1213 02:00:00.738557 2314 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:00:00.739433 kubelet[2314]: I1213 02:00:00.738568 2314 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:00:00.739433 kubelet[2314]: I1213 02:00:00.738765 2314 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:00:00.745927 kubelet[2314]: I1213 02:00:00.745402 2314 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:00:00.745927 kubelet[2314]: I1213 02:00:00.745447 2314 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:00:00.745927 kubelet[2314]: I1213 02:00:00.745476 2314 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:00:00.745927 kubelet[2314]: I1213 02:00:00.745497 2314 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:00:00.751643 kubelet[2314]: W1213 02:00:00.751577 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.247.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4-277531bf34&limit=500&resourceVersion=0": dial tcp 168.119.247.250:6443: connect: connection refused Dec 13 02:00:00.751761 kubelet[2314]: E1213 02:00:00.751674 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://168.119.247.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4-277531bf34&limit=500&resourceVersion=0\": dial tcp 168.119.247.250:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:00:00.752815 kubelet[2314]: W1213 02:00:00.752249 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.247.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 168.119.247.250:6443: connect: connection refused Dec 13 02:00:00.752815 kubelet[2314]: E1213 02:00:00.752317 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://168.119.247.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.247.250:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:00:00.752815 kubelet[2314]: I1213 02:00:00.752515 2314 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:00:00.755199 kubelet[2314]: I1213 02:00:00.755174 2314 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:00:00.756425 kubelet[2314]: W1213 02:00:00.756196 2314 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:00:00.757206 kubelet[2314]: I1213 02:00:00.757189 2314 server.go:1269] "Started kubelet" Dec 13 02:00:00.758238 kubelet[2314]: I1213 02:00:00.758202 2314 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:00:00.762635 kubelet[2314]: I1213 02:00:00.762565 2314 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:00:00.762953 kubelet[2314]: I1213 02:00:00.762931 2314 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:00:00.763230 kubelet[2314]: I1213 02:00:00.763214 2314 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:00:00.769298 kubelet[2314]: I1213 02:00:00.769273 2314 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:00:00.770295 kubelet[2314]: E1213 02:00:00.767725 2314 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.247.250:6443/api/v1/namespaces/default/events\": dial tcp 168.119.247.250:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-4-277531bf34.18109a04860ca1b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-4-277531bf34,UID:ci-4081-2-1-4-277531bf34,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4-277531bf34,},FirstTimestamp:2024-12-13 02:00:00.757162419 +0000 UTC m=+0.982538697,LastTimestamp:2024-12-13 02:00:00.757162419 +0000 UTC m=+0.982538697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4-277531bf34,}" Dec 13 02:00:00.771161 kubelet[2314]: I1213 02:00:00.771139 2314 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:00:00.774328 kubelet[2314]: I1213 
02:00:00.774304 2314 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:00:00.774842 kubelet[2314]: E1213 02:00:00.774820 2314 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-4-277531bf34\" not found" Dec 13 02:00:00.775550 kubelet[2314]: I1213 02:00:00.775529 2314 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:00:00.775688 kubelet[2314]: I1213 02:00:00.775678 2314 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:00:00.776628 kubelet[2314]: W1213 02:00:00.776572 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.247.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.247.250:6443: connect: connection refused Dec 13 02:00:00.776762 kubelet[2314]: E1213 02:00:00.776741 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://168.119.247.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.247.250:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:00:00.777401 kubelet[2314]: E1213 02:00:00.776897 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.247.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4-277531bf34?timeout=10s\": dial tcp 168.119.247.250:6443: connect: connection refused" interval="200ms" Dec 13 02:00:00.780720 kubelet[2314]: I1213 02:00:00.780692 2314 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:00:00.783035 kubelet[2314]: I1213 02:00:00.780946 2314 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:00:00.783035 kubelet[2314]: I1213 02:00:00.781068 2314 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:00:00.794742 kubelet[2314]: I1213 02:00:00.794672 2314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:00:00.795803 kubelet[2314]: I1213 02:00:00.795771 2314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:00:00.795803 kubelet[2314]: I1213 02:00:00.795798 2314 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:00:00.795900 kubelet[2314]: I1213 02:00:00.795823 2314 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:00:00.795900 kubelet[2314]: E1213 02:00:00.795870 2314 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:00:00.809117 kubelet[2314]: E1213 02:00:00.808967 2314 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:00:00.809521 kubelet[2314]: W1213 02:00:00.809414 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.247.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.247.250:6443: connect: connection refused Dec 13 02:00:00.809568 kubelet[2314]: E1213 02:00:00.809515 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://168.119.247.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.247.250:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:00:00.815468 kubelet[2314]: I1213 02:00:00.815441 2314 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:00:00.815468 kubelet[2314]: I1213 02:00:00.815459 2314 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:00:00.815468 kubelet[2314]: I1213 02:00:00.815478 2314 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:00:00.817632 kubelet[2314]: I1213 02:00:00.817593 2314 policy_none.go:49] "None policy: Start" Dec 13 02:00:00.818299 kubelet[2314]: I1213 02:00:00.818282 2314 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:00:00.818426 kubelet[2314]: I1213 02:00:00.818416 2314 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:00:00.823118 kubelet[2314]: E1213 02:00:00.822998 2314 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.247.250:6443/api/v1/namespaces/default/events\": dial tcp 168.119.247.250:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-4-277531bf34.18109a04860ca1b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-4-277531bf34,UID:ci-4081-2-1-4-277531bf34,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4-277531bf34,},FirstTimestamp:2024-12-13 02:00:00.757162419 +0000 UTC m=+0.982538697,LastTimestamp:2024-12-13 02:00:00.757162419 +0000 UTC m=+0.982538697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4-277531bf34,}" Dec 13 02:00:00.826159 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 02:00:00.837814 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 02:00:00.854493 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
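Note: the NodeConfig dump above shows CgroupsPerQOS:true, CgroupDriver:systemd and CgroupVersion:2, so the kubelet delegates cgroup creation to systemd: kubepods.slice at the top, kubepods-besteffort.slice and kubepods-burstable.slice for the QoS classes, and one slice per pod beneath them (the kubepods-burstable-pod... slices that follow). On a cgroup v2 host this hierarchy is visible under /sys/fs/cgroup (illustrative paths):

  ls /sys/fs/cgroup/kubepods.slice/
  ls /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/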
Dec 13 02:00:00.856793 kubelet[2314]: I1213 02:00:00.856734 2314 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:00:00.857073 kubelet[2314]: I1213 02:00:00.856994 2314 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:00:00.857073 kubelet[2314]: I1213 02:00:00.857044 2314 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:00:00.858756 kubelet[2314]: I1213 02:00:00.858584 2314 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:00:00.861421 kubelet[2314]: E1213 02:00:00.861391 2314 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-4-277531bf34\" not found" Dec 13 02:00:00.914660 systemd[1]: Created slice kubepods-burstable-pod32a7644631ac2ad10bffedccc3d2b4c4.slice - libcontainer container kubepods-burstable-pod32a7644631ac2ad10bffedccc3d2b4c4.slice. Dec 13 02:00:00.929988 systemd[1]: Created slice kubepods-burstable-pod70b5ce24f641fe280a3b7e6dd7964981.slice - libcontainer container kubepods-burstable-pod70b5ce24f641fe280a3b7e6dd7964981.slice. Dec 13 02:00:00.945089 systemd[1]: Created slice kubepods-burstable-podf0e3b4b820021fba042086aaed55baff.slice - libcontainer container kubepods-burstable-podf0e3b4b820021fba042086aaed55baff.slice. Dec 13 02:00:00.960006 kubelet[2314]: I1213 02:00:00.959971 2314 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-277531bf34" Dec 13 02:00:00.960458 kubelet[2314]: E1213 02:00:00.960371 2314 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://168.119.247.250:6443/api/v1/nodes\": dial tcp 168.119.247.250:6443: connect: connection refused" node="ci-4081-2-1-4-277531bf34" Dec 13 02:00:00.977846 kubelet[2314]: E1213 02:00:00.977751 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.247.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4-277531bf34?timeout=10s\": dial tcp 168.119.247.250:6443: connect: connection refused" interval="400ms" Dec 13 02:00:01.077267 kubelet[2314]: I1213 02:00:01.077133 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.077267 kubelet[2314]: I1213 02:00:01.077269 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.077513 kubelet[2314]: I1213 02:00:01.077341 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.077513 kubelet[2314]: I1213 
02:00:01.077407 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.077513 kubelet[2314]: I1213 02:00:01.077472 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.077513 kubelet[2314]: I1213 02:00:01.077505 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32a7644631ac2ad10bffedccc3d2b4c4-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-4-277531bf34\" (UID: \"32a7644631ac2ad10bffedccc3d2b4c4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.077738 kubelet[2314]: I1213 02:00:01.077561 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0e3b4b820021fba042086aaed55baff-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-4-277531bf34\" (UID: \"f0e3b4b820021fba042086aaed55baff\") " pod="kube-system/kube-scheduler-ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.077738 kubelet[2314]: I1213 02:00:01.077700 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32a7644631ac2ad10bffedccc3d2b4c4-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-4-277531bf34\" (UID: \"32a7644631ac2ad10bffedccc3d2b4c4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.077894 kubelet[2314]: I1213 02:00:01.077779 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32a7644631ac2ad10bffedccc3d2b4c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-4-277531bf34\" (UID: \"32a7644631ac2ad10bffedccc3d2b4c4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.164753 kubelet[2314]: I1213 02:00:01.164626 2314 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.165330 kubelet[2314]: E1213 02:00:01.165292 2314 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://168.119.247.250:6443/api/v1/nodes\": dial tcp 168.119.247.250:6443: connect: connection refused" node="ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.227753 containerd[1476]: time="2024-12-13T02:00:01.227135805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-4-277531bf34,Uid:32a7644631ac2ad10bffedccc3d2b4c4,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:01.241157 containerd[1476]: time="2024-12-13T02:00:01.241093184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-4-277531bf34,Uid:70b5ce24f641fe280a3b7e6dd7964981,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:01.250603 containerd[1476]: time="2024-12-13T02:00:01.250519701Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-4-277531bf34,Uid:f0e3b4b820021fba042086aaed55baff,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:01.378590 kubelet[2314]: E1213 02:00:01.378482 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.247.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4-277531bf34?timeout=10s\": dial tcp 168.119.247.250:6443: connect: connection refused" interval="800ms" Dec 13 02:00:01.568406 kubelet[2314]: I1213 02:00:01.568351 2314 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.569576 kubelet[2314]: E1213 02:00:01.569525 2314 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://168.119.247.250:6443/api/v1/nodes\": dial tcp 168.119.247.250:6443: connect: connection refused" node="ci-4081-2-1-4-277531bf34" Dec 13 02:00:01.775226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126777233.mount: Deactivated successfully. Dec 13 02:00:01.781044 containerd[1476]: time="2024-12-13T02:00:01.780141541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:00:01.782226 containerd[1476]: time="2024-12-13T02:00:01.782177195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Dec 13 02:00:01.785139 containerd[1476]: time="2024-12-13T02:00:01.785060398Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:00:01.786648 containerd[1476]: time="2024-12-13T02:00:01.786611137Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:00:01.787265 containerd[1476]: time="2024-12-13T02:00:01.787239689Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:00:01.789040 containerd[1476]: time="2024-12-13T02:00:01.788831709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:00:01.789040 containerd[1476]: time="2024-12-13T02:00:01.788935987Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:00:01.791764 containerd[1476]: time="2024-12-13T02:00:01.791623512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:00:01.794656 containerd[1476]: time="2024-12-13T02:00:01.794428436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.961177ms" Dec 13 02:00:01.794940 containerd[1476]: time="2024-12-13T02:00:01.794916950Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.651226ms" Dec 13 02:00:01.796488 containerd[1476]: time="2024-12-13T02:00:01.796456490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.82263ms" Dec 13 02:00:01.894036 kubelet[2314]: W1213 02:00:01.893443 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.247.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 168.119.247.250:6443: connect: connection refused Dec 13 02:00:01.894036 kubelet[2314]: E1213 02:00:01.893547 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://168.119.247.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.247.250:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:00:01.946240 containerd[1476]: time="2024-12-13T02:00:01.946126305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:01.946240 containerd[1476]: time="2024-12-13T02:00:01.946181385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:01.946240 containerd[1476]: time="2024-12-13T02:00:01.946192664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:01.946585 containerd[1476]: time="2024-12-13T02:00:01.946276263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:01.948281 containerd[1476]: time="2024-12-13T02:00:01.947954961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:01.948872 containerd[1476]: time="2024-12-13T02:00:01.948739071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:01.948872 containerd[1476]: time="2024-12-13T02:00:01.948763551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:01.950095 containerd[1476]: time="2024-12-13T02:00:01.950039694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:01.952268 containerd[1476]: time="2024-12-13T02:00:01.952168947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:01.952268 containerd[1476]: time="2024-12-13T02:00:01.952228946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:01.952268 containerd[1476]: time="2024-12-13T02:00:01.952240706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:01.952572 containerd[1476]: time="2024-12-13T02:00:01.952328025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:01.976482 systemd[1]: Started cri-containerd-1af7c0390da538ab0d54a08e390df7b05aab64f22c8f9ebf5ca3965d568c3531.scope - libcontainer container 1af7c0390da538ab0d54a08e390df7b05aab64f22c8f9ebf5ca3965d568c3531. Dec 13 02:00:01.983609 systemd[1]: Started cri-containerd-557efc230741aee278a78ee904177f54e35393a72e10f7e9c0ae3b78783c726b.scope - libcontainer container 557efc230741aee278a78ee904177f54e35393a72e10f7e9c0ae3b78783c726b. Dec 13 02:00:01.986743 systemd[1]: Started cri-containerd-9a6261e433b28a5550448acdd4583943829b5c647a02a1d65fa0dae4afbb6aec.scope - libcontainer container 9a6261e433b28a5550448acdd4583943829b5c647a02a1d65fa0dae4afbb6aec. Dec 13 02:00:02.035047 containerd[1476]: time="2024-12-13T02:00:02.031828190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-4-277531bf34,Uid:32a7644631ac2ad10bffedccc3d2b4c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1af7c0390da538ab0d54a08e390df7b05aab64f22c8f9ebf5ca3965d568c3531\"" Dec 13 02:00:02.042838 containerd[1476]: time="2024-12-13T02:00:02.042608543Z" level=info msg="CreateContainer within sandbox \"1af7c0390da538ab0d54a08e390df7b05aab64f22c8f9ebf5ca3965d568c3531\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:00:02.053198 containerd[1476]: time="2024-12-13T02:00:02.052408348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-4-277531bf34,Uid:70b5ce24f641fe280a3b7e6dd7964981,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a6261e433b28a5550448acdd4583943829b5c647a02a1d65fa0dae4afbb6aec\"" Dec 13 02:00:02.056718 containerd[1476]: time="2024-12-13T02:00:02.056487940Z" level=info msg="CreateContainer within sandbox \"9a6261e433b28a5550448acdd4583943829b5c647a02a1d65fa0dae4afbb6aec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:00:02.066578 containerd[1476]: time="2024-12-13T02:00:02.066384903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-4-277531bf34,Uid:f0e3b4b820021fba042086aaed55baff,Namespace:kube-system,Attempt:0,} returns sandbox id \"557efc230741aee278a78ee904177f54e35393a72e10f7e9c0ae3b78783c726b\"" Dec 13 02:00:02.070934 containerd[1476]: time="2024-12-13T02:00:02.070815491Z" level=info msg="CreateContainer within sandbox \"557efc230741aee278a78ee904177f54e35393a72e10f7e9c0ae3b78783c726b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:00:02.075361 containerd[1476]: time="2024-12-13T02:00:02.075192720Z" level=info msg="CreateContainer within sandbox \"1af7c0390da538ab0d54a08e390df7b05aab64f22c8f9ebf5ca3965d568c3531\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc39e62ada2aa91e8ec6ee8c6bf61403f78f32ccd416dc521bf89b1f097c2c4f\"" Dec 13 02:00:02.077109 containerd[1476]: time="2024-12-13T02:00:02.075999950Z" level=info msg="StartContainer for \"dc39e62ada2aa91e8ec6ee8c6bf61403f78f32ccd416dc521bf89b1f097c2c4f\"" Dec 13 02:00:02.089449 containerd[1476]: time="2024-12-13T02:00:02.089277594Z" 
level=info msg="CreateContainer within sandbox \"9a6261e433b28a5550448acdd4583943829b5c647a02a1d65fa0dae4afbb6aec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a3b309a59fc14619d807e57435191348e9a8e3531229c675479488a07b42af16\"" Dec 13 02:00:02.090977 kubelet[2314]: W1213 02:00:02.090728 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.247.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.247.250:6443: connect: connection refused Dec 13 02:00:02.091365 kubelet[2314]: E1213 02:00:02.091334 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://168.119.247.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.247.250:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:00:02.091544 containerd[1476]: time="2024-12-13T02:00:02.091500648Z" level=info msg="StartContainer for \"a3b309a59fc14619d807e57435191348e9a8e3531229c675479488a07b42af16\"" Dec 13 02:00:02.103118 containerd[1476]: time="2024-12-13T02:00:02.103062952Z" level=info msg="CreateContainer within sandbox \"557efc230741aee278a78ee904177f54e35393a72e10f7e9c0ae3b78783c726b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d3edc72b7aa5c3b46f48a09e7fd1c2636ab188cffe66175b2eb0ad335ff4a9db\"" Dec 13 02:00:02.103756 containerd[1476]: time="2024-12-13T02:00:02.103726544Z" level=info msg="StartContainer for \"d3edc72b7aa5c3b46f48a09e7fd1c2636ab188cffe66175b2eb0ad335ff4a9db\"" Dec 13 02:00:02.121709 systemd[1]: Started cri-containerd-dc39e62ada2aa91e8ec6ee8c6bf61403f78f32ccd416dc521bf89b1f097c2c4f.scope - libcontainer container dc39e62ada2aa91e8ec6ee8c6bf61403f78f32ccd416dc521bf89b1f097c2c4f. Dec 13 02:00:02.137236 systemd[1]: Started cri-containerd-a3b309a59fc14619d807e57435191348e9a8e3531229c675479488a07b42af16.scope - libcontainer container a3b309a59fc14619d807e57435191348e9a8e3531229c675479488a07b42af16. Dec 13 02:00:02.163158 systemd[1]: Started cri-containerd-d3edc72b7aa5c3b46f48a09e7fd1c2636ab188cffe66175b2eb0ad335ff4a9db.scope - libcontainer container d3edc72b7aa5c3b46f48a09e7fd1c2636ab188cffe66175b2eb0ad335ff4a9db. 
Dec 13 02:00:02.179852 kubelet[2314]: E1213 02:00:02.179807 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.247.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4-277531bf34?timeout=10s\": dial tcp 168.119.247.250:6443: connect: connection refused" interval="1.6s"
Dec 13 02:00:02.184041 containerd[1476]: time="2024-12-13T02:00:02.183541205Z" level=info msg="StartContainer for \"dc39e62ada2aa91e8ec6ee8c6bf61403f78f32ccd416dc521bf89b1f097c2c4f\" returns successfully"
Dec 13 02:00:02.205716 kubelet[2314]: W1213 02:00:02.205458 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.247.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.247.250:6443: connect: connection refused
Dec 13 02:00:02.205716 kubelet[2314]: E1213 02:00:02.205530 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://168.119.247.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.247.250:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:00:02.236840 containerd[1476]: time="2024-12-13T02:00:02.236672340Z" level=info msg="StartContainer for \"a3b309a59fc14619d807e57435191348e9a8e3531229c675479488a07b42af16\" returns successfully"
Dec 13 02:00:02.241356 containerd[1476]: time="2024-12-13T02:00:02.241114568Z" level=info msg="StartContainer for \"d3edc72b7aa5c3b46f48a09e7fd1c2636ab188cffe66175b2eb0ad335ff4a9db\" returns successfully"
Dec 13 02:00:02.328426 kubelet[2314]: W1213 02:00:02.326430 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.247.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4-277531bf34&limit=500&resourceVersion=0": dial tcp 168.119.247.250:6443: connect: connection refused
Dec 13 02:00:02.328426 kubelet[2314]: E1213 02:00:02.326500 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://168.119.247.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4-277531bf34&limit=500&resourceVersion=0\": dial tcp 168.119.247.250:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:00:02.373274 kubelet[2314]: I1213 02:00:02.372553 2314 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-277531bf34"
Dec 13 02:00:04.888084 kubelet[2314]: E1213 02:00:04.888036 2314 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-4-277531bf34\" not found" node="ci-4081-2-1-4-277531bf34"
Dec 13 02:00:04.936123 kubelet[2314]: I1213 02:00:04.936078 2314 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-4-277531bf34"
Dec 13 02:00:05.755193 kubelet[2314]: I1213 02:00:05.754873 2314 apiserver.go:52] "Watching apiserver"
Dec 13 02:00:05.775779 kubelet[2314]: I1213 02:00:05.775751 2314 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 02:00:07.244204 systemd[1]: Reloading requested from client PID 2589 ('systemctl') (unit session-7.scope)...
Dec 13 02:00:07.244222 systemd[1]: Reloading...
Dec 13 02:00:07.348054 zram_generator::config[2628]: No configuration found.
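The "connection refused" retries above are the kubelet's client-go reflectors waiting for the static kube-apiserver pod it just started; once the container answers on 168.119.247.250:6443, node registration succeeds and the watches recover on their own. As a minimal illustrative sketch (not part of the logged system), a Go probe against the same health endpoint could look like this; TLS verification is skipped here only to keep the example self-contained, where a real check would trust the cluster CA:

```go
// probe_apiserver.go: poll the kube-apiserver liveness endpoint until the
// "connection refused" phase seen in the log above clears. Endpoint taken
// from the log; everything else is illustrative.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// Skip cert verification purely for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://168.119.247.250:6443/healthz")
		if err != nil {
			// Mirrors the log: "dial tcp ...: connect: connection refused".
			fmt.Println("apiserver not reachable yet:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered with HTTP", resp.StatusCode)
		return
	}
}
```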
Dec 13 02:00:07.442215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:00:07.521897 systemd[1]: Reloading finished in 277 ms.
Dec 13 02:00:07.568401 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:00:07.581895 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 02:00:07.582321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:00:07.582411 systemd[1]: kubelet.service: Consumed 1.416s CPU time, 116.8M memory peak, 0B memory swap peak.
Dec 13 02:00:07.589831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:00:07.718342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:00:07.726415 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 02:00:07.798760 kubelet[2674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:00:07.799886 kubelet[2674]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:00:07.799886 kubelet[2674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:00:07.799886 kubelet[2674]: I1213 02:00:07.799441 2674 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:00:07.808762 kubelet[2674]: I1213 02:00:07.808701 2674 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 02:00:07.808762 kubelet[2674]: I1213 02:00:07.808741 2674 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:00:07.809078 kubelet[2674]: I1213 02:00:07.809000 2674 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 02:00:07.810809 kubelet[2674]: I1213 02:00:07.810757 2674 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 02:00:07.813842 kubelet[2674]: I1213 02:00:07.813805 2674 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:00:07.817288 kubelet[2674]: E1213 02:00:07.817246 2674 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 02:00:07.817288 kubelet[2674]: I1213 02:00:07.817284 2674 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 02:00:07.820126 kubelet[2674]: I1213 02:00:07.820061 2674 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 02:00:07.820316 kubelet[2674]: I1213 02:00:07.820270 2674 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 02:00:07.820466 kubelet[2674]: I1213 02:00:07.820422 2674 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:00:07.820662 kubelet[2674]: I1213 02:00:07.820453 2674 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-4-277531bf34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 02:00:07.820662 kubelet[2674]: I1213 02:00:07.820659 2674 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:00:07.820831 kubelet[2674]: I1213 02:00:07.820668 2674 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 02:00:07.820831 kubelet[2674]: I1213 02:00:07.820701 2674 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:00:07.820831 kubelet[2674]: I1213 02:00:07.820813 2674 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 02:00:07.820831 kubelet[2674]: I1213 02:00:07.820828 2674 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:00:07.821000 kubelet[2674]: I1213 02:00:07.820849 2674 kubelet.go:314] "Adding apiserver pod source"
Dec 13 02:00:07.825031 kubelet[2674]: I1213 02:00:07.822430 2674 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:00:07.829500 kubelet[2674]: I1213 02:00:07.829389 2674 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 02:00:07.831751 kubelet[2674]: I1213 02:00:07.830719 2674 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:00:07.833782 kubelet[2674]: I1213 02:00:07.833756 2674 server.go:1269] "Started kubelet"
Dec 13 02:00:07.837864 kubelet[2674]: I1213 02:00:07.837842 2674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 02:00:07.850036 kubelet[2674]: I1213 02:00:07.849974 2674 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 02:00:07.853029 kubelet[2674]: I1213 02:00:07.835114 2674 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 02:00:07.853029 kubelet[2674]: I1213 02:00:07.851404 2674 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 02:00:07.853300 kubelet[2674]: I1213 02:00:07.853283 2674 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 02:00:07.853986 kubelet[2674]: I1213 02:00:07.853862 2674 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 02:00:07.857209 kubelet[2674]: I1213 02:00:07.857188 2674 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 02:00:07.857606 kubelet[2674]: E1213 02:00:07.857576 2674 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-4-277531bf34\" not found"
Dec 13 02:00:07.860824 kubelet[2674]: I1213 02:00:07.860787 2674 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 02:00:07.862238 kubelet[2674]: I1213 02:00:07.862219 2674 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 02:00:07.871924 kubelet[2674]: I1213 02:00:07.871061 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:00:07.873671 kubelet[2674]: I1213 02:00:07.873238 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:00:07.873671 kubelet[2674]: I1213 02:00:07.873265 2674 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:00:07.873671 kubelet[2674]: I1213 02:00:07.873295 2674 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 02:00:07.873671 kubelet[2674]: E1213 02:00:07.873352 2674 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 02:00:07.880927 kubelet[2674]: E1213 02:00:07.879784 2674 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 02:00:07.881628 kubelet[2674]: I1213 02:00:07.881582 2674 factory.go:221] Registration of the containerd container factory successfully
Dec 13 02:00:07.881628 kubelet[2674]: I1213 02:00:07.881605 2674 factory.go:221] Registration of the systemd container factory successfully
Dec 13 02:00:07.881724 kubelet[2674]: I1213 02:00:07.881706 2674 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 02:00:07.940740 kubelet[2674]: I1213 02:00:07.940700 2674 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 02:00:07.940907 kubelet[2674]: I1213 02:00:07.940893 2674 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 02:00:07.940993 kubelet[2674]: I1213 02:00:07.940983 2674 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:00:07.941299 kubelet[2674]: I1213 02:00:07.941253 2674 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 02:00:07.941299 kubelet[2674]: I1213 02:00:07.941269 2674 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 02:00:07.941454 kubelet[2674]: I1213 02:00:07.941289 2674 policy_none.go:49] "None policy: Start"
Dec 13 02:00:07.942235 kubelet[2674]: I1213 02:00:07.942116 2674 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 02:00:07.942235 kubelet[2674]: I1213 02:00:07.942192 2674 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 02:00:07.943061 kubelet[2674]: I1213 02:00:07.942532 2674 state_mem.go:75] "Updated machine memory state"
Dec 13 02:00:07.948286 kubelet[2674]: I1213 02:00:07.948080 2674 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 02:00:07.948924 kubelet[2674]: I1213 02:00:07.948875 2674 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 02:00:07.948991 kubelet[2674]: I1213 02:00:07.948923 2674 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 02:00:07.950088 kubelet[2674]: I1213 02:00:07.949490 2674 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:00:07.982647 kubelet[2674]: E1213 02:00:07.982584 2674 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" already exists" pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:07.982810 kubelet[2674]: E1213 02:00:07.982727 2674 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-4-277531bf34\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.055708 kubelet[2674]: I1213 02:00:08.055369 2674 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.063502 kubelet[2674]: I1213 02:00:08.063396 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.063639 kubelet[2674]: I1213 02:00:08.063532 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0e3b4b820021fba042086aaed55baff-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-4-277531bf34\" (UID: \"f0e3b4b820021fba042086aaed55baff\") " pod="kube-system/kube-scheduler-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.063665 kubelet[2674]: I1213 02:00:08.063632 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32a7644631ac2ad10bffedccc3d2b4c4-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-4-277531bf34\" (UID: \"32a7644631ac2ad10bffedccc3d2b4c4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.063686 kubelet[2674]: I1213 02:00:08.063662 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32a7644631ac2ad10bffedccc3d2b4c4-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-4-277531bf34\" (UID: \"32a7644631ac2ad10bffedccc3d2b4c4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.064438 kubelet[2674]: I1213 02:00:08.063720 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.064438 kubelet[2674]: I1213 02:00:08.063806 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.064438 kubelet[2674]: I1213 02:00:08.063832 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.064438 kubelet[2674]: I1213 02:00:08.064065 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70b5ce24f641fe280a3b7e6dd7964981-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-4-277531bf34\" (UID: \"70b5ce24f641fe280a3b7e6dd7964981\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.064438 kubelet[2674]: I1213 02:00:08.064116 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32a7644631ac2ad10bffedccc3d2b4c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-4-277531bf34\" (UID: \"32a7644631ac2ad10bffedccc3d2b4c4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.067065 kubelet[2674]: I1213 02:00:08.066901 2674 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.067266 kubelet[2674]: I1213 02:00:08.067241 2674 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-4-277531bf34"
Dec 13 02:00:08.825455 kubelet[2674]: I1213 02:00:08.825388 2674 apiserver.go:52] "Watching apiserver"
Dec 13 02:00:08.861539 kubelet[2674]: I1213 02:00:08.861463 2674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 02:00:08.968886 kubelet[2674]: I1213 02:00:08.967445 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-4-277531bf34" podStartSLOduration=1.967428157 podStartE2EDuration="1.967428157s" podCreationTimestamp="2024-12-13 02:00:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:08.957934206 +0000 UTC m=+1.225500436" watchObservedRunningTime="2024-12-13 02:00:08.967428157 +0000 UTC m=+1.234994387"
Dec 13 02:00:09.013841 kubelet[2674]: I1213 02:00:09.013775 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-4-277531bf34" podStartSLOduration=2.01374685 podStartE2EDuration="2.01374685s" podCreationTimestamp="2024-12-13 02:00:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:08.995065294 +0000 UTC m=+1.262631524" watchObservedRunningTime="2024-12-13 02:00:09.01374685 +0000 UTC m=+1.281313080"
Dec 13 02:00:09.036585 kubelet[2674]: I1213 02:00:09.036393 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-4-277531bf34" podStartSLOduration=3.036372436 podStartE2EDuration="3.036372436s" podCreationTimestamp="2024-12-13 02:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:09.015688002 +0000 UTC m=+1.283254232" watchObservedRunningTime="2024-12-13 02:00:09.036372436 +0000 UTC m=+1.303938626"
Dec 13 02:00:13.190758 sudo[1844]: pam_unix(sudo:session): session closed for user root
Dec 13 02:00:13.352455 sshd[1841]: pam_unix(sshd:session): session closed for user core
Dec 13 02:00:13.356863 systemd[1]: sshd@6-168.119.247.250:22-147.75.109.163:33816.service: Deactivated successfully.
Dec 13 02:00:13.360002 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:00:13.362117 systemd[1]: session-7.scope: Consumed 7.225s CPU time, 153.8M memory peak, 0B memory swap peak.
Dec 13 02:00:13.364861 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:00:13.367126 systemd-logind[1450]: Removed session 7.
Dec 13 02:00:14.232911 kubelet[2674]: I1213 02:00:14.232877 2674 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 02:00:14.234128 kubelet[2674]: I1213 02:00:14.233501 2674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 02:00:14.234376 containerd[1476]: time="2024-12-13T02:00:14.233294142Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 02:00:15.101168 systemd[1]: Created slice kubepods-besteffort-podd90edc7e_637f_4ced_bca5_3009ef037cd1.slice - libcontainer container kubepods-besteffort-podd90edc7e_637f_4ced_bca5_3009ef037cd1.slice.
Dec 13 02:00:15.207212 kubelet[2674]: I1213 02:00:15.206966 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6bth\" (UniqueName: \"kubernetes.io/projected/d90edc7e-637f-4ced-bca5-3009ef037cd1-kube-api-access-z6bth\") pod \"kube-proxy-l8zvr\" (UID: \"d90edc7e-637f-4ced-bca5-3009ef037cd1\") " pod="kube-system/kube-proxy-l8zvr"
Dec 13 02:00:15.207212 kubelet[2674]: I1213 02:00:15.207026 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d90edc7e-637f-4ced-bca5-3009ef037cd1-kube-proxy\") pod \"kube-proxy-l8zvr\" (UID: \"d90edc7e-637f-4ced-bca5-3009ef037cd1\") " pod="kube-system/kube-proxy-l8zvr"
Dec 13 02:00:15.207212 kubelet[2674]: I1213 02:00:15.207048 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d90edc7e-637f-4ced-bca5-3009ef037cd1-xtables-lock\") pod \"kube-proxy-l8zvr\" (UID: \"d90edc7e-637f-4ced-bca5-3009ef037cd1\") " pod="kube-system/kube-proxy-l8zvr"
Dec 13 02:00:15.207212 kubelet[2674]: I1213 02:00:15.207067 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d90edc7e-637f-4ced-bca5-3009ef037cd1-lib-modules\") pod \"kube-proxy-l8zvr\" (UID: \"d90edc7e-637f-4ced-bca5-3009ef037cd1\") " pod="kube-system/kube-proxy-l8zvr"
Dec 13 02:00:15.340113 systemd[1]: Created slice kubepods-besteffort-poded035b06_5976_4b65_9bc6_f267216e571d.slice - libcontainer container kubepods-besteffort-poded035b06_5976_4b65_9bc6_f267216e571d.slice.
Dec 13 02:00:15.411234 kubelet[2674]: I1213 02:00:15.408661 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ed035b06-5976-4b65-9bc6-f267216e571d-var-lib-calico\") pod \"tigera-operator-76c4976dd7-5g2hj\" (UID: \"ed035b06-5976-4b65-9bc6-f267216e571d\") " pod="tigera-operator/tigera-operator-76c4976dd7-5g2hj"
Dec 13 02:00:15.411234 kubelet[2674]: I1213 02:00:15.408724 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdclk\" (UniqueName: \"kubernetes.io/projected/ed035b06-5976-4b65-9bc6-f267216e571d-kube-api-access-kdclk\") pod \"tigera-operator-76c4976dd7-5g2hj\" (UID: \"ed035b06-5976-4b65-9bc6-f267216e571d\") " pod="tigera-operator/tigera-operator-76c4976dd7-5g2hj"
Dec 13 02:00:15.412706 containerd[1476]: time="2024-12-13T02:00:15.412526611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l8zvr,Uid:d90edc7e-637f-4ced-bca5-3009ef037cd1,Namespace:kube-system,Attempt:0,}"
Dec 13 02:00:15.444330 containerd[1476]: time="2024-12-13T02:00:15.444187366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:00:15.444330 containerd[1476]: time="2024-12-13T02:00:15.444258606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:00:15.444607 containerd[1476]: time="2024-12-13T02:00:15.444274486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:00:15.444806 containerd[1476]: time="2024-12-13T02:00:15.444762967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:00:15.467231 systemd[1]: Started cri-containerd-6213d957e74b4f2c75c22b57a320d4fc979cae2327eb1a65b87f0c2417dd25fc.scope - libcontainer container 6213d957e74b4f2c75c22b57a320d4fc979cae2327eb1a65b87f0c2417dd25fc.
Dec 13 02:00:15.498228 containerd[1476]: time="2024-12-13T02:00:15.498168985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l8zvr,Uid:d90edc7e-637f-4ced-bca5-3009ef037cd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6213d957e74b4f2c75c22b57a320d4fc979cae2327eb1a65b87f0c2417dd25fc\""
Dec 13 02:00:15.502954 containerd[1476]: time="2024-12-13T02:00:15.502827471Z" level=info msg="CreateContainer within sandbox \"6213d957e74b4f2c75c22b57a320d4fc979cae2327eb1a65b87f0c2417dd25fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 02:00:15.523622 containerd[1476]: time="2024-12-13T02:00:15.523556093Z" level=info msg="CreateContainer within sandbox \"6213d957e74b4f2c75c22b57a320d4fc979cae2327eb1a65b87f0c2417dd25fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b318fe9b633fd7e9aa2edef9cf162810b66af8cbf620e9324bd69e5dba23540d\""
Dec 13 02:00:15.526054 containerd[1476]: time="2024-12-13T02:00:15.525976016Z" level=info msg="StartContainer for \"b318fe9b633fd7e9aa2edef9cf162810b66af8cbf620e9324bd69e5dba23540d\""
Dec 13 02:00:15.557293 systemd[1]: Started cri-containerd-b318fe9b633fd7e9aa2edef9cf162810b66af8cbf620e9324bd69e5dba23540d.scope - libcontainer container b318fe9b633fd7e9aa2edef9cf162810b66af8cbf620e9324bd69e5dba23540d.
Dec 13 02:00:15.606982 containerd[1476]: time="2024-12-13T02:00:15.606902625Z" level=info msg="StartContainer for \"b318fe9b633fd7e9aa2edef9cf162810b66af8cbf620e9324bd69e5dba23540d\" returns successfully"
Dec 13 02:00:15.645232 containerd[1476]: time="2024-12-13T02:00:15.644136785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-5g2hj,Uid:ed035b06-5976-4b65-9bc6-f267216e571d,Namespace:tigera-operator,Attempt:0,}"
Dec 13 02:00:15.685147 containerd[1476]: time="2024-12-13T02:00:15.684218309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:00:15.685147 containerd[1476]: time="2024-12-13T02:00:15.684266869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:00:15.685147 containerd[1476]: time="2024-12-13T02:00:15.684277990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:00:15.685147 containerd[1476]: time="2024-12-13T02:00:15.684355910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:00:15.717386 systemd[1]: Started cri-containerd-547167ffc976b31ff78b0ee3107a4bcba2e768889dcac3feb1eea795ee2ec677.scope - libcontainer container 547167ffc976b31ff78b0ee3107a4bcba2e768889dcac3feb1eea795ee2ec677.
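The RunPodSandbox, CreateContainer, StartContainer sequence above is the CRI flow the kubelet drives through containerd; the resulting containers live in containerd's "k8s.io" namespace rather than the default one. A hedged sketch, assuming the containerd Go client (github.com/containerd/containerd) is available on such a node, that lists them:

```go
// list_containers.go: connect to the containerd socket and list the
// CRI-created containers. Illustrative; run it on the node itself with
// access to /run/containerd/containerd.sock.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed containers (kube-proxy, the operator, ...) are
	// kept in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // IDs like b318fe9b633fd7e9... in the log above
	}
}
```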
Dec 13 02:00:15.755618 containerd[1476]: time="2024-12-13T02:00:15.755554428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-5g2hj,Uid:ed035b06-5976-4b65-9bc6-f267216e571d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"547167ffc976b31ff78b0ee3107a4bcba2e768889dcac3feb1eea795ee2ec677\""
Dec 13 02:00:15.759412 containerd[1476]: time="2024-12-13T02:00:15.759369552Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 02:00:15.964282 kubelet[2674]: I1213 02:00:15.963772 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l8zvr" podStartSLOduration=0.963752576 podStartE2EDuration="963.752576ms" podCreationTimestamp="2024-12-13 02:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:15.951828883 +0000 UTC m=+8.219395113" watchObservedRunningTime="2024-12-13 02:00:15.963752576 +0000 UTC m=+8.231318766"
Dec 13 02:00:16.338042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269412023.mount: Deactivated successfully.
Dec 13 02:00:20.310254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70326659.mount: Deactivated successfully.
Dec 13 02:00:20.627089 containerd[1476]: time="2024-12-13T02:00:20.626932859Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:00:20.629404 containerd[1476]: time="2024-12-13T02:00:20.629369391Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125388"
Dec 13 02:00:20.630323 containerd[1476]: time="2024-12-13T02:00:20.630287755Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:00:20.633221 containerd[1476]: time="2024-12-13T02:00:20.633182809Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:00:20.635488 containerd[1476]: time="2024-12-13T02:00:20.635185459Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 4.875775667s"
Dec 13 02:00:20.635488 containerd[1476]: time="2024-12-13T02:00:20.635224499Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Dec 13 02:00:20.638255 containerd[1476]: time="2024-12-13T02:00:20.638187633Z" level=info msg="CreateContainer within sandbox \"547167ffc976b31ff78b0ee3107a4bcba2e768889dcac3feb1eea795ee2ec677\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 02:00:20.653785 containerd[1476]: time="2024-12-13T02:00:20.653625587Z" level=info msg="CreateContainer within sandbox \"547167ffc976b31ff78b0ee3107a4bcba2e768889dcac3feb1eea795ee2ec677\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c8881aa7f207dba4cf3935e9f6f1d370d6ba8ad44ccb80efb51b5d6d8baee7d8\""
Dec 13 02:00:20.655845 containerd[1476]: time="2024-12-13T02:00:20.655809517Z" level=info msg="StartContainer for \"c8881aa7f207dba4cf3935e9f6f1d370d6ba8ad44ccb80efb51b5d6d8baee7d8\""
Dec 13 02:00:20.689257 systemd[1]: Started cri-containerd-c8881aa7f207dba4cf3935e9f6f1d370d6ba8ad44ccb80efb51b5d6d8baee7d8.scope - libcontainer container c8881aa7f207dba4cf3935e9f6f1d370d6ba8ad44ccb80efb51b5d6d8baee7d8.
Dec 13 02:00:20.723480 containerd[1476]: time="2024-12-13T02:00:20.723405761Z" level=info msg="StartContainer for \"c8881aa7f207dba4cf3935e9f6f1d370d6ba8ad44ccb80efb51b5d6d8baee7d8\" returns successfully"
Dec 13 02:00:20.966344 kubelet[2674]: I1213 02:00:20.966191 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-5g2hj" podStartSLOduration=1.08756625 podStartE2EDuration="5.966174244s" podCreationTimestamp="2024-12-13 02:00:15 +0000 UTC" firstStartedPulling="2024-12-13 02:00:15.75768239 +0000 UTC m=+8.025248620" lastFinishedPulling="2024-12-13 02:00:20.636290384 +0000 UTC m=+12.903856614" observedRunningTime="2024-12-13 02:00:20.965564561 +0000 UTC m=+13.233130831" watchObservedRunningTime="2024-12-13 02:00:20.966174244 +0000 UTC m=+13.233740474"
Dec 13 02:00:24.732554 systemd[1]: Created slice kubepods-besteffort-pod1ee28c33_97c3_4359_a74e_682d75fb090d.slice - libcontainer container kubepods-besteffort-pod1ee28c33_97c3_4359_a74e_682d75fb090d.slice.
Dec 13 02:00:24.869044 kubelet[2674]: I1213 02:00:24.868241 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1ee28c33-97c3-4359-a74e-682d75fb090d-typha-certs\") pod \"calico-typha-66c457cf7-xk5r4\" (UID: \"1ee28c33-97c3-4359-a74e-682d75fb090d\") " pod="calico-system/calico-typha-66c457cf7-xk5r4"
Dec 13 02:00:24.869044 kubelet[2674]: I1213 02:00:24.868359 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ee28c33-97c3-4359-a74e-682d75fb090d-tigera-ca-bundle\") pod \"calico-typha-66c457cf7-xk5r4\" (UID: \"1ee28c33-97c3-4359-a74e-682d75fb090d\") " pod="calico-system/calico-typha-66c457cf7-xk5r4"
Dec 13 02:00:24.869044 kubelet[2674]: I1213 02:00:24.868391 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t5bl\" (UniqueName: \"kubernetes.io/projected/1ee28c33-97c3-4359-a74e-682d75fb090d-kube-api-access-5t5bl\") pod \"calico-typha-66c457cf7-xk5r4\" (UID: \"1ee28c33-97c3-4359-a74e-682d75fb090d\") " pod="calico-system/calico-typha-66c457cf7-xk5r4"
Dec 13 02:00:24.882517 systemd[1]: Created slice kubepods-besteffort-podc1638a21_c20f_4b6f_8c3c_1ac0d6e9914b.slice - libcontainer container kubepods-besteffort-podc1638a21_c20f_4b6f_8c3c_1ac0d6e9914b.slice.
Dec 13 02:00:24.969760 kubelet[2674]: I1213 02:00:24.968674 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-flexvol-driver-host\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.969760 kubelet[2674]: I1213 02:00:24.968754 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gszn\" (UniqueName: \"kubernetes.io/projected/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-kube-api-access-4gszn\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.969760 kubelet[2674]: I1213 02:00:24.968794 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-xtables-lock\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.969760 kubelet[2674]: I1213 02:00:24.968836 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-cni-net-dir\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.969760 kubelet[2674]: I1213 02:00:24.968897 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-var-run-calico\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.970108 kubelet[2674]: I1213 02:00:24.968934 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-cni-bin-dir\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.970108 kubelet[2674]: I1213 02:00:24.968972 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-cni-log-dir\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.970108 kubelet[2674]: I1213 02:00:24.969062 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-tigera-ca-bundle\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.972043 kubelet[2674]: I1213 02:00:24.970576 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-policysync\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.972043 kubelet[2674]: I1213 02:00:24.970622 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-node-certs\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.972043 kubelet[2674]: I1213 02:00:24.970649 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-lib-modules\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.972043 kubelet[2674]: I1213 02:00:24.970673 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b-var-lib-calico\") pod \"calico-node-kbxcz\" (UID: \"c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b\") " pod="calico-system/calico-node-kbxcz"
Dec 13 02:00:24.994539 kubelet[2674]: E1213 02:00:24.993874 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb9nr" podUID="c72753a3-f6db-44d3-aefe-df577750df39"
Dec 13 02:00:25.038099 containerd[1476]: time="2024-12-13T02:00:25.038049172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c457cf7-xk5r4,Uid:1ee28c33-97c3-4359-a74e-682d75fb090d,Namespace:calico-system,Attempt:0,}"
Dec 13 02:00:25.076445 kubelet[2674]: E1213 02:00:25.076310 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.076445 kubelet[2674]: W1213 02:00:25.076337 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.077980 kubelet[2674]: E1213 02:00:25.077955 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.079200 kubelet[2674]: E1213 02:00:25.078969 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.079200 kubelet[2674]: W1213 02:00:25.078998 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.079200 kubelet[2674]: E1213 02:00:25.079034 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.083262 kubelet[2674]: E1213 02:00:25.082407 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.083262 kubelet[2674]: W1213 02:00:25.082610 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.083262 kubelet[2674]: E1213 02:00:25.083223 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.086705 kubelet[2674]: E1213 02:00:25.086673 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.086705 kubelet[2674]: W1213 02:00:25.086698 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.089119 kubelet[2674]: E1213 02:00:25.089074 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.089287 kubelet[2674]: E1213 02:00:25.089263 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.090506 kubelet[2674]: W1213 02:00:25.090460 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.090860 kubelet[2674]: E1213 02:00:25.090787 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.091789 kubelet[2674]: E1213 02:00:25.091597 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.091789 kubelet[2674]: W1213 02:00:25.091623 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.092781 kubelet[2674]: E1213 02:00:25.091944 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.093089 kubelet[2674]: E1213 02:00:25.093062 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.093089 kubelet[2674]: W1213 02:00:25.093081 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.093336 kubelet[2674]: E1213 02:00:25.093201 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.094009 kubelet[2674]: E1213 02:00:25.093987 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.094009 kubelet[2674]: W1213 02:00:25.094002 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.095097 kubelet[2674]: E1213 02:00:25.095067 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.095634 kubelet[2674]: E1213 02:00:25.095509 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.095634 kubelet[2674]: W1213 02:00:25.095526 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.095955 containerd[1476]: time="2024-12-13T02:00:25.092844647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:00:25.095955 containerd[1476]: time="2024-12-13T02:00:25.095803591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:00:25.095955 containerd[1476]: time="2024-12-13T02:00:25.095847151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:00:25.096084 kubelet[2674]: E1213 02:00:25.095815 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.096559 containerd[1476]: time="2024-12-13T02:00:25.096385515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:00:25.096616 kubelet[2674]: E1213 02:00:25.096514 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.096645 kubelet[2674]: W1213 02:00:25.096619 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.098450 kubelet[2674]: E1213 02:00:25.098349 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.099686 kubelet[2674]: E1213 02:00:25.099612 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.099686 kubelet[2674]: W1213 02:00:25.099630 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.100001 kubelet[2674]: E1213 02:00:25.099899 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.100966 kubelet[2674]: E1213 02:00:25.100348 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.100966 kubelet[2674]: W1213 02:00:25.100366 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.100966 kubelet[2674]: E1213 02:00:25.100456 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.100966 kubelet[2674]: E1213 02:00:25.100817 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.100966 kubelet[2674]: W1213 02:00:25.100839 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.100966 kubelet[2674]: E1213 02:00:25.100852 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.105081 kubelet[2674]: E1213 02:00:25.104334 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.105081 kubelet[2674]: W1213 02:00:25.104357 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.105081 kubelet[2674]: E1213 02:00:25.104378 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.111148 kubelet[2674]: E1213 02:00:25.111111 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.111148 kubelet[2674]: W1213 02:00:25.111136 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.111933 kubelet[2674]: E1213 02:00:25.111158 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.113739 kubelet[2674]: E1213 02:00:25.113705 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.113739 kubelet[2674]: W1213 02:00:25.113727 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.113969 kubelet[2674]: E1213 02:00:25.113753 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.114230 kubelet[2674]: E1213 02:00:25.114207 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.114230 kubelet[2674]: W1213 02:00:25.114224 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.114507 kubelet[2674]: E1213 02:00:25.114335 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.114957 kubelet[2674]: E1213 02:00:25.114775 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.114957 kubelet[2674]: W1213 02:00:25.114793 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.114957 kubelet[2674]: E1213 02:00:25.114818 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.115420 kubelet[2674]: E1213 02:00:25.115242 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.115420 kubelet[2674]: W1213 02:00:25.115257 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.115420 kubelet[2674]: E1213 02:00:25.115269 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.115767 kubelet[2674]: E1213 02:00:25.115748 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.115767 kubelet[2674]: W1213 02:00:25.115766 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.115846 kubelet[2674]: E1213 02:00:25.115779 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.117513 kubelet[2674]: E1213 02:00:25.117077 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.117513 kubelet[2674]: W1213 02:00:25.117100 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.117513 kubelet[2674]: E1213 02:00:25.117117 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.118074 kubelet[2674]: E1213 02:00:25.118044 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.118074 kubelet[2674]: W1213 02:00:25.118066 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.118162 kubelet[2674]: E1213 02:00:25.118082 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.118916 kubelet[2674]: E1213 02:00:25.118689 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.118916 kubelet[2674]: W1213 02:00:25.118708 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.118916 kubelet[2674]: E1213 02:00:25.118721 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.119501 kubelet[2674]: E1213 02:00:25.119468 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.119501 kubelet[2674]: W1213 02:00:25.119490 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.119501 kubelet[2674]: E1213 02:00:25.119503 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.120090 kubelet[2674]: E1213 02:00:25.120050 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.120090 kubelet[2674]: W1213 02:00:25.120068 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.120090 kubelet[2674]: E1213 02:00:25.120080 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.120710 kubelet[2674]: E1213 02:00:25.120683 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.120710 kubelet[2674]: W1213 02:00:25.120702 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.120801 kubelet[2674]: E1213 02:00:25.120714 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.121271 kubelet[2674]: E1213 02:00:25.121247 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.121271 kubelet[2674]: W1213 02:00:25.121263 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.121371 kubelet[2674]: E1213 02:00:25.121276 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.122185 kubelet[2674]: E1213 02:00:25.121960 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.122185 kubelet[2674]: W1213 02:00:25.121981 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.122185 kubelet[2674]: E1213 02:00:25.121994 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.123053 kubelet[2674]: E1213 02:00:25.122809 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.123053 kubelet[2674]: W1213 02:00:25.122829 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.123053 kubelet[2674]: E1213 02:00:25.122841 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.124204 kubelet[2674]: E1213 02:00:25.123284 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.124204 kubelet[2674]: W1213 02:00:25.123296 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.124204 kubelet[2674]: E1213 02:00:25.123311 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.124204 kubelet[2674]: E1213 02:00:25.123490 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.124204 kubelet[2674]: W1213 02:00:25.123497 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.124204 kubelet[2674]: E1213 02:00:25.123505 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.136234 systemd[1]: Started cri-containerd-d43144fe716a20d490aadb118d34084960f3bfd679d306a792feaf81ff85c8b0.scope - libcontainer container d43144fe716a20d490aadb118d34084960f3bfd679d306a792feaf81ff85c8b0.
Dec 13 02:00:25.173414 containerd[1476]: time="2024-12-13T02:00:25.173239086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c457cf7-xk5r4,Uid:1ee28c33-97c3-4359-a74e-682d75fb090d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d43144fe716a20d490aadb118d34084960f3bfd679d306a792feaf81ff85c8b0\""
Dec 13 02:00:25.175151 kubelet[2674]: E1213 02:00:25.174792 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.175151 kubelet[2674]: W1213 02:00:25.174822 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.175151 kubelet[2674]: E1213 02:00:25.174842 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.175151 kubelet[2674]: I1213 02:00:25.174903 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c72753a3-f6db-44d3-aefe-df577750df39-varrun\") pod \"csi-node-driver-mb9nr\" (UID: \"c72753a3-f6db-44d3-aefe-df577750df39\") " pod="calico-system/csi-node-driver-mb9nr"
Dec 13 02:00:25.176845 kubelet[2674]: E1213 02:00:25.175963 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.176845 kubelet[2674]: W1213 02:00:25.176286 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.176845 kubelet[2674]: E1213 02:00:25.176316 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:25.176845 kubelet[2674]: I1213 02:00:25.176341 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c72753a3-f6db-44d3-aefe-df577750df39-registration-dir\") pod \"csi-node-driver-mb9nr\" (UID: \"c72753a3-f6db-44d3-aefe-df577750df39\") " pod="calico-system/csi-node-driver-mb9nr"
Dec 13 02:00:25.177054 kubelet[2674]: E1213 02:00:25.176899 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:25.177054 kubelet[2674]: W1213 02:00:25.176913 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:25.177673 kubelet[2674]: E1213 02:00:25.177123 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 13 02:00:25.177673 kubelet[2674]: I1213 02:00:25.177151 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c72753a3-f6db-44d3-aefe-df577750df39-socket-dir\") pod \"csi-node-driver-mb9nr\" (UID: \"c72753a3-f6db-44d3-aefe-df577750df39\") " pod="calico-system/csi-node-driver-mb9nr" Dec 13 02:00:25.177765 kubelet[2674]: E1213 02:00:25.177676 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.177765 kubelet[2674]: W1213 02:00:25.177748 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.179053 kubelet[2674]: E1213 02:00:25.177763 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.179053 kubelet[2674]: I1213 02:00:25.177893 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c72753a3-f6db-44d3-aefe-df577750df39-kubelet-dir\") pod \"csi-node-driver-mb9nr\" (UID: \"c72753a3-f6db-44d3-aefe-df577750df39\") " pod="calico-system/csi-node-driver-mb9nr" Dec 13 02:00:25.179053 kubelet[2674]: E1213 02:00:25.178247 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.179053 kubelet[2674]: W1213 02:00:25.178260 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.179053 kubelet[2674]: E1213 02:00:25.178412 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.179053 kubelet[2674]: W1213 02:00:25.178421 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.179053 kubelet[2674]: E1213 02:00:25.178561 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.179053 kubelet[2674]: W1213 02:00:25.178568 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.179053 kubelet[2674]: E1213 02:00:25.178577 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.179317 kubelet[2674]: E1213 02:00:25.178749 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:25.179317 kubelet[2674]: I1213 02:00:25.178772 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vvh\" (UniqueName: \"kubernetes.io/projected/c72753a3-f6db-44d3-aefe-df577750df39-kube-api-access-n4vvh\") pod \"csi-node-driver-mb9nr\" (UID: \"c72753a3-f6db-44d3-aefe-df577750df39\") " pod="calico-system/csi-node-driver-mb9nr" Dec 13 02:00:25.179317 kubelet[2674]: E1213 02:00:25.178783 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.179317 kubelet[2674]: E1213 02:00:25.178833 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.179317 kubelet[2674]: W1213 02:00:25.178840 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.179317 kubelet[2674]: E1213 02:00:25.178847 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.179317 kubelet[2674]: E1213 02:00:25.179045 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.179317 kubelet[2674]: W1213 02:00:25.179054 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.179317 kubelet[2674]: E1213 02:00:25.179068 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.179519 kubelet[2674]: E1213 02:00:25.179228 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.179519 kubelet[2674]: W1213 02:00:25.179237 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.179519 kubelet[2674]: E1213 02:00:25.179245 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.179519 kubelet[2674]: E1213 02:00:25.179450 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.179519 kubelet[2674]: W1213 02:00:25.179459 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.179519 kubelet[2674]: E1213 02:00:25.179470 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:25.180972 kubelet[2674]: E1213 02:00:25.179656 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.180972 kubelet[2674]: W1213 02:00:25.179665 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.180972 kubelet[2674]: E1213 02:00:25.179694 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.180972 kubelet[2674]: E1213 02:00:25.179863 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.180972 kubelet[2674]: W1213 02:00:25.179873 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.180972 kubelet[2674]: E1213 02:00:25.179881 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.180972 kubelet[2674]: E1213 02:00:25.180095 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.180972 kubelet[2674]: W1213 02:00:25.180104 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.180972 kubelet[2674]: E1213 02:00:25.180113 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.180972 kubelet[2674]: E1213 02:00:25.180428 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.182385 containerd[1476]: time="2024-12-13T02:00:25.180378343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 02:00:25.182424 kubelet[2674]: W1213 02:00:25.180439 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.182424 kubelet[2674]: E1213 02:00:25.180485 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.188053 containerd[1476]: time="2024-12-13T02:00:25.187873682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kbxcz,Uid:c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b,Namespace:calico-system,Attempt:0,}" Dec 13 02:00:25.222622 containerd[1476]: time="2024-12-13T02:00:25.222299036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:25.222622 containerd[1476]: time="2024-12-13T02:00:25.222364836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:25.222622 containerd[1476]: time="2024-12-13T02:00:25.222389476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:25.223079 containerd[1476]: time="2024-12-13T02:00:25.222589038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:25.244197 systemd[1]: Started cri-containerd-c66cf9b8eb809277c48cd2ce9650af4e4a2f5aefcbc8ab12706f91b1ad92fb3a.scope - libcontainer container c66cf9b8eb809277c48cd2ce9650af4e4a2f5aefcbc8ab12706f91b1ad92fb3a. Dec 13 02:00:25.280984 containerd[1476]: time="2024-12-13T02:00:25.278992526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kbxcz,Uid:c1638a21-c20f-4b6f-8c3c-1ac0d6e9914b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c66cf9b8eb809277c48cd2ce9650af4e4a2f5aefcbc8ab12706f91b1ad92fb3a\"" Dec 13 02:00:25.285933 kubelet[2674]: E1213 02:00:25.281408 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.285933 kubelet[2674]: W1213 02:00:25.281586 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.285933 kubelet[2674]: E1213 02:00:25.281611 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.285933 kubelet[2674]: E1213 02:00:25.282400 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.285933 kubelet[2674]: W1213 02:00:25.282415 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.285933 kubelet[2674]: E1213 02:00:25.282429 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.285933 kubelet[2674]: E1213 02:00:25.283659 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.285933 kubelet[2674]: W1213 02:00:25.283721 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.285933 kubelet[2674]: E1213 02:00:25.283834 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:25.285933 kubelet[2674]: E1213 02:00:25.284266 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.286268 kubelet[2674]: W1213 02:00:25.284278 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.286268 kubelet[2674]: E1213 02:00:25.284396 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.286268 kubelet[2674]: E1213 02:00:25.284598 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.286268 kubelet[2674]: W1213 02:00:25.284629 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.286268 kubelet[2674]: E1213 02:00:25.284714 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.286268 kubelet[2674]: E1213 02:00:25.284880 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.286268 kubelet[2674]: W1213 02:00:25.284888 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.286268 kubelet[2674]: E1213 02:00:25.284905 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.286268 kubelet[2674]: E1213 02:00:25.285216 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.286268 kubelet[2674]: W1213 02:00:25.285226 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.286472 kubelet[2674]: E1213 02:00:25.285315 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.286472 kubelet[2674]: E1213 02:00:25.285766 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.286472 kubelet[2674]: W1213 02:00:25.285778 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.286472 kubelet[2674]: E1213 02:00:25.285797 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:25.286472 kubelet[2674]: E1213 02:00:25.286030 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.286472 kubelet[2674]: W1213 02:00:25.286039 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.286472 kubelet[2674]: E1213 02:00:25.286084 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.286472 kubelet[2674]: E1213 02:00:25.286269 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.286472 kubelet[2674]: W1213 02:00:25.286277 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.286472 kubelet[2674]: E1213 02:00:25.286333 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.287585 kubelet[2674]: E1213 02:00:25.286501 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.287585 kubelet[2674]: W1213 02:00:25.286510 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.287585 kubelet[2674]: E1213 02:00:25.286567 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.287585 kubelet[2674]: E1213 02:00:25.286783 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.287585 kubelet[2674]: W1213 02:00:25.286792 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.287585 kubelet[2674]: E1213 02:00:25.286842 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.287585 kubelet[2674]: E1213 02:00:25.287077 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.287585 kubelet[2674]: W1213 02:00:25.287086 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.287585 kubelet[2674]: E1213 02:00:25.287194 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:25.287585 kubelet[2674]: E1213 02:00:25.287375 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.287886 kubelet[2674]: W1213 02:00:25.287384 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.287886 kubelet[2674]: E1213 02:00:25.287460 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.287886 kubelet[2674]: E1213 02:00:25.287649 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.287886 kubelet[2674]: W1213 02:00:25.287658 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.287886 kubelet[2674]: E1213 02:00:25.287716 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.287886 kubelet[2674]: E1213 02:00:25.287884 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.287886 kubelet[2674]: W1213 02:00:25.287891 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.288307 kubelet[2674]: E1213 02:00:25.288134 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.288373 kubelet[2674]: E1213 02:00:25.288348 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.288373 kubelet[2674]: W1213 02:00:25.288365 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.289108 kubelet[2674]: E1213 02:00:25.288918 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.289108 kubelet[2674]: E1213 02:00:25.289110 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.289201 kubelet[2674]: W1213 02:00:25.289120 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.289472 kubelet[2674]: E1213 02:00:25.289209 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:25.289472 kubelet[2674]: E1213 02:00:25.289347 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.289472 kubelet[2674]: W1213 02:00:25.289355 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.291066 kubelet[2674]: E1213 02:00:25.289474 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.291066 kubelet[2674]: E1213 02:00:25.289661 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.291066 kubelet[2674]: W1213 02:00:25.289670 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.291066 kubelet[2674]: E1213 02:00:25.289762 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.291066 kubelet[2674]: E1213 02:00:25.289885 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.291066 kubelet[2674]: W1213 02:00:25.289892 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.291066 kubelet[2674]: E1213 02:00:25.289904 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.291066 kubelet[2674]: E1213 02:00:25.290397 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.291066 kubelet[2674]: W1213 02:00:25.290409 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.291066 kubelet[2674]: E1213 02:00:25.290429 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.291388 kubelet[2674]: E1213 02:00:25.290631 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.291388 kubelet[2674]: W1213 02:00:25.290640 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.291388 kubelet[2674]: E1213 02:00:25.290719 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:25.291388 kubelet[2674]: E1213 02:00:25.290810 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.291388 kubelet[2674]: W1213 02:00:25.290818 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.291388 kubelet[2674]: E1213 02:00:25.290826 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.291388 kubelet[2674]: E1213 02:00:25.290975 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.291388 kubelet[2674]: W1213 02:00:25.290982 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.291388 kubelet[2674]: E1213 02:00:25.290990 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:25.310090 kubelet[2674]: E1213 02:00:25.310008 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:25.310090 kubelet[2674]: W1213 02:00:25.310087 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:25.310310 kubelet[2674]: E1213 02:00:25.310116 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:26.761669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886196785.mount: Deactivated successfully. 
Dec 13 02:00:26.874216 kubelet[2674]: E1213 02:00:26.874159 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb9nr" podUID="c72753a3-f6db-44d3-aefe-df577750df39" Dec 13 02:00:27.585097 containerd[1476]: time="2024-12-13T02:00:27.584312948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:27.585097 containerd[1476]: time="2024-12-13T02:00:27.585061114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 02:00:27.585694 containerd[1476]: time="2024-12-13T02:00:27.585667280Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:27.587830 containerd[1476]: time="2024-12-13T02:00:27.587766939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:27.588686 containerd[1476]: time="2024-12-13T02:00:27.588655187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.408239524s" Dec 13 02:00:27.588740 containerd[1476]: time="2024-12-13T02:00:27.588688067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 02:00:27.590903 containerd[1476]: time="2024-12-13T02:00:27.590865047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 02:00:27.600268 containerd[1476]: time="2024-12-13T02:00:27.600218332Z" level=info msg="CreateContainer within sandbox \"d43144fe716a20d490aadb118d34084960f3bfd679d306a792feaf81ff85c8b0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:00:27.620620 containerd[1476]: time="2024-12-13T02:00:27.620465115Z" level=info msg="CreateContainer within sandbox \"d43144fe716a20d490aadb118d34084960f3bfd679d306a792feaf81ff85c8b0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"984bb378dc05397dd1103bd4af4e39446fa463c516648c85888225bda136ade6\"" Dec 13 02:00:27.621203 containerd[1476]: time="2024-12-13T02:00:27.621056041Z" level=info msg="StartContainer for \"984bb378dc05397dd1103bd4af4e39446fa463c516648c85888225bda136ade6\"" Dec 13 02:00:27.658302 systemd[1]: Started cri-containerd-984bb378dc05397dd1103bd4af4e39446fa463c516648c85888225bda136ade6.scope - libcontainer container 984bb378dc05397dd1103bd4af4e39446fa463c516648c85888225bda136ade6. 
Dec 13 02:00:27.704647 containerd[1476]: time="2024-12-13T02:00:27.704441997Z" level=info msg="StartContainer for \"984bb378dc05397dd1103bd4af4e39446fa463c516648c85888225bda136ade6\" returns successfully" Dec 13 02:00:27.992919 kubelet[2674]: I1213 02:00:27.992257 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66c457cf7-xk5r4" podStartSLOduration=1.5802798519999999 podStartE2EDuration="3.992093366s" podCreationTimestamp="2024-12-13 02:00:24 +0000 UTC" firstStartedPulling="2024-12-13 02:00:25.178308966 +0000 UTC m=+17.445875156" lastFinishedPulling="2024-12-13 02:00:27.59012244 +0000 UTC m=+19.857688670" observedRunningTime="2024-12-13 02:00:27.991918565 +0000 UTC m=+20.259484795" watchObservedRunningTime="2024-12-13 02:00:27.992093366 +0000 UTC m=+20.259659596" Dec 13 02:00:28.045723 kubelet[2674]: E1213 02:00:28.045642 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.045723 kubelet[2674]: W1213 02:00:28.045724 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.046195 kubelet[2674]: E1213 02:00:28.045776 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.046462 kubelet[2674]: E1213 02:00:28.046301 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.046462 kubelet[2674]: W1213 02:00:28.046329 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.046462 kubelet[2674]: E1213 02:00:28.046350 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.047222 kubelet[2674]: E1213 02:00:28.046774 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.047222 kubelet[2674]: W1213 02:00:28.046825 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.047222 kubelet[2674]: E1213 02:00:28.046851 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.047482 kubelet[2674]: E1213 02:00:28.047347 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.047482 kubelet[2674]: W1213 02:00:28.047366 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.047482 kubelet[2674]: E1213 02:00:28.047385 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:28.048857 kubelet[2674]: E1213 02:00:28.048825 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.048857 kubelet[2674]: W1213 02:00:28.048854 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.049156 kubelet[2674]: E1213 02:00:28.048880 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.051128 kubelet[2674]: E1213 02:00:28.051076 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.051128 kubelet[2674]: W1213 02:00:28.051104 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.051128 kubelet[2674]: E1213 02:00:28.051127 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.052411 kubelet[2674]: E1213 02:00:28.052077 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.052411 kubelet[2674]: W1213 02:00:28.052107 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.052411 kubelet[2674]: E1213 02:00:28.052131 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.053107 kubelet[2674]: E1213 02:00:28.053065 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.053107 kubelet[2674]: W1213 02:00:28.053100 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.053692 kubelet[2674]: E1213 02:00:28.053126 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.054357 kubelet[2674]: E1213 02:00:28.054151 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.054357 kubelet[2674]: W1213 02:00:28.054174 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.054357 kubelet[2674]: E1213 02:00:28.054205 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:28.054862 kubelet[2674]: E1213 02:00:28.054724 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.054862 kubelet[2674]: W1213 02:00:28.054740 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.054862 kubelet[2674]: E1213 02:00:28.054768 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.055114 kubelet[2674]: E1213 02:00:28.055100 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.055114 kubelet[2674]: W1213 02:00:28.055141 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.055114 kubelet[2674]: E1213 02:00:28.055160 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.055581 kubelet[2674]: E1213 02:00:28.055565 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.055581 kubelet[2674]: W1213 02:00:28.055608 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.055581 kubelet[2674]: E1213 02:00:28.055625 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.056116 kubelet[2674]: E1213 02:00:28.056051 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.056116 kubelet[2674]: W1213 02:00:28.056068 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.056116 kubelet[2674]: E1213 02:00:28.056081 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.056841 kubelet[2674]: E1213 02:00:28.056779 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.056841 kubelet[2674]: W1213 02:00:28.056799 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.056841 kubelet[2674]: E1213 02:00:28.056816 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:00:28.057374 kubelet[2674]: E1213 02:00:28.057274 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.057374 kubelet[2674]: W1213 02:00:28.057300 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.057374 kubelet[2674]: E1213 02:00:28.057313 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.110260 kubelet[2674]: E1213 02:00:28.110094 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.110260 kubelet[2674]: W1213 02:00:28.110121 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.110260 kubelet[2674]: E1213 02:00:28.110160 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.110957 kubelet[2674]: E1213 02:00:28.110806 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.110957 kubelet[2674]: W1213 02:00:28.110820 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.110957 kubelet[2674]: E1213 02:00:28.110897 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.111249 kubelet[2674]: E1213 02:00:28.111234 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.111249 kubelet[2674]: W1213 02:00:28.111249 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.111325 kubelet[2674]: E1213 02:00:28.111279 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:00:28.111815 kubelet[2674]: E1213 02:00:28.111778 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:00:28.111815 kubelet[2674]: W1213 02:00:28.111794 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:00:28.111815 kubelet[2674]: E1213 02:00:28.111814 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 13 02:00:28.112162 kubelet[2674]: E1213 02:00:28.112145 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.112162 kubelet[2674]: W1213 02:00:28.112158 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.112162 kubelet[2674]: E1213 02:00:28.112191 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.114058 kubelet[2674]: E1213 02:00:28.112980 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.114058 kubelet[2674]: W1213 02:00:28.113002 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.114058 kubelet[2674]: E1213 02:00:28.113234 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.114600 kubelet[2674]: E1213 02:00:28.114271 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.114600 kubelet[2674]: W1213 02:00:28.114286 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.114600 kubelet[2674]: E1213 02:00:28.114322 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.115238 kubelet[2674]: E1213 02:00:28.115101 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.115238 kubelet[2674]: W1213 02:00:28.115122 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.115238 kubelet[2674]: E1213 02:00:28.115153 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.115631 kubelet[2674]: E1213 02:00:28.115604 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.115874 kubelet[2674]: W1213 02:00:28.115781 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.116074 kubelet[2674]: E1213 02:00:28.116035 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.116462 kubelet[2674]: E1213 02:00:28.116363 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.116462 kubelet[2674]: W1213 02:00:28.116376 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.116462 kubelet[2674]: E1213 02:00:28.116400 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.117471 kubelet[2674]: E1213 02:00:28.117364 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.117471 kubelet[2674]: W1213 02:00:28.117386 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.117471 kubelet[2674]: E1213 02:00:28.117408 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.117678 kubelet[2674]: E1213 02:00:28.117658 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.117678 kubelet[2674]: W1213 02:00:28.117669 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.117829 kubelet[2674]: E1213 02:00:28.117685 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.117984 kubelet[2674]: E1213 02:00:28.117968 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.117984 kubelet[2674]: W1213 02:00:28.117982 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.118377 kubelet[2674]: E1213 02:00:28.118121 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.118558 kubelet[2674]: E1213 02:00:28.118541 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.118621 kubelet[2674]: W1213 02:00:28.118558 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.118621 kubelet[2674]: E1213 02:00:28.118580 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.118836 kubelet[2674]: E1213 02:00:28.118823 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.118836 kubelet[2674]: W1213 02:00:28.118836 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.119055 kubelet[2674]: E1213 02:00:28.118852 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.119320 kubelet[2674]: E1213 02:00:28.119304 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.119609 kubelet[2674]: W1213 02:00:28.119369 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.119609 kubelet[2674]: E1213 02:00:28.119400 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.119872 kubelet[2674]: E1213 02:00:28.119850 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.119872 kubelet[2674]: W1213 02:00:28.119868 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.119971 kubelet[2674]: E1213 02:00:28.119882 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:28.120540 kubelet[2674]: E1213 02:00:28.120516 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:28.120630 kubelet[2674]: W1213 02:00:28.120617 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:28.120748 kubelet[2674]: E1213 02:00:28.120702 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
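The three-line pattern above is one failure logged from three angles: the kubelet cannot execute the FlexVolume driver binary, so the call produces zero bytes of output, and unmarshalling that empty output as JSON is what yields "unexpected end of JSON input". A minimal Go sketch of both halves, assuming the documented FlexVolume convention that `<driver> init` prints a JSON status object (the struct here is illustrative, not the kubelet's own type):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON a FlexVolume driver is expected to print
// for "init", e.g. {"status": "Success", "capabilities": {"attach": false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The driver binary is missing, so the exec fails with a not-found error
	// and produces no output at all (the kubelet's wrapper reports this as
	// "executable file not found in $PATH", the W1213 driver-call.go:149 line).
	out, err := exec.Command("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init").Output()
	fmt.Println("exec error:", err)

	// Unmarshalling the empty output is the second half of the failure: with
	// zero input bytes, encoding/json returns exactly
	// "unexpected end of JSON input" (the E1213 driver-call.go:262 line).
	var st driverStatus
	fmt.Println("unmarshal error:", json.Unmarshal(out, &st))
}
```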
Dec 13 02:00:28.873773 kubelet[2674]: E1213 02:00:28.873715 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb9nr" podUID="c72753a3-f6db-44d3-aefe-df577750df39"
Dec 13 02:00:28.975472 kubelet[2674]: I1213 02:00:28.975426 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 02:00:29.063754 kubelet[2674]: E1213 02:00:29.063709 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.063754 kubelet[2674]: W1213 02:00:29.063745 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.063754 kubelet[2674]: E1213 02:00:29.063775 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.064824 kubelet[2674]: E1213 02:00:29.064101 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.064824 kubelet[2674]: W1213 02:00:29.064116 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.064824 kubelet[2674]: E1213 02:00:29.064133 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.064824 kubelet[2674]: E1213 02:00:29.064476 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.064824 kubelet[2674]: W1213 02:00:29.064493 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.064824 kubelet[2674]: E1213 02:00:29.064511 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.064824 kubelet[2674]: E1213 02:00:29.064788 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.064824 kubelet[2674]: W1213 02:00:29.064801 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.064824 kubelet[2674]: E1213 02:00:29.064817 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.065983 kubelet[2674]: E1213 02:00:29.065103 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.065983 kubelet[2674]: W1213 02:00:29.065117 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.065983 kubelet[2674]: E1213 02:00:29.065133 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.065983 kubelet[2674]: E1213 02:00:29.065389 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.065983 kubelet[2674]: W1213 02:00:29.065402 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.065983 kubelet[2674]: E1213 02:00:29.065417 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.065983 kubelet[2674]: E1213 02:00:29.065866 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.065983 kubelet[2674]: W1213 02:00:29.065884 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.065983 kubelet[2674]: E1213 02:00:29.065902 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.066461 kubelet[2674]: E1213 02:00:29.066189 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.066461 kubelet[2674]: W1213 02:00:29.066204 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.066461 kubelet[2674]: E1213 02:00:29.066218 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.066677 kubelet[2674]: E1213 02:00:29.066537 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.066677 kubelet[2674]: W1213 02:00:29.066552 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.066677 kubelet[2674]: E1213 02:00:29.066567 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.066923 kubelet[2674]: E1213 02:00:29.066806 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.066923 kubelet[2674]: W1213 02:00:29.066822 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.066923 kubelet[2674]: E1213 02:00:29.066838 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.067198 kubelet[2674]: E1213 02:00:29.067087 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.067198 kubelet[2674]: W1213 02:00:29.067100 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.067198 kubelet[2674]: E1213 02:00:29.067115 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.067417 kubelet[2674]: E1213 02:00:29.067353 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.067417 kubelet[2674]: W1213 02:00:29.067366 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.067417 kubelet[2674]: E1213 02:00:29.067380 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.067812 kubelet[2674]: E1213 02:00:29.067790 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.067812 kubelet[2674]: W1213 02:00:29.067812 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.067812 kubelet[2674]: E1213 02:00:29.067828 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.068149 kubelet[2674]: E1213 02:00:29.068128 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.068222 kubelet[2674]: W1213 02:00:29.068150 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.068222 kubelet[2674]: E1213 02:00:29.068166 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.068441 kubelet[2674]: E1213 02:00:29.068413 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.068441 kubelet[2674]: W1213 02:00:29.068432 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.068540 kubelet[2674]: E1213 02:00:29.068446 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.120622 kubelet[2674]: E1213 02:00:29.120470 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.120622 kubelet[2674]: W1213 02:00:29.120498 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.120622 kubelet[2674]: E1213 02:00:29.120519 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.121052 kubelet[2674]: E1213 02:00:29.120894 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.121052 kubelet[2674]: W1213 02:00:29.120907 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.121052 kubelet[2674]: E1213 02:00:29.120933 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.121417 kubelet[2674]: E1213 02:00:29.121302 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.121417 kubelet[2674]: W1213 02:00:29.121314 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.121417 kubelet[2674]: E1213 02:00:29.121342 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.121829 kubelet[2674]: E1213 02:00:29.121717 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.121829 kubelet[2674]: W1213 02:00:29.121729 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.121829 kubelet[2674]: E1213 02:00:29.121746 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.122128 kubelet[2674]: E1213 02:00:29.121933 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.122128 kubelet[2674]: W1213 02:00:29.121942 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.122128 kubelet[2674]: E1213 02:00:29.121957 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.122311 kubelet[2674]: E1213 02:00:29.122267 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.122311 kubelet[2674]: W1213 02:00:29.122278 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.122372 kubelet[2674]: E1213 02:00:29.122304 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.122597 kubelet[2674]: E1213 02:00:29.122542 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.122597 kubelet[2674]: W1213 02:00:29.122557 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.122597 kubelet[2674]: E1213 02:00:29.122576 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.122904 kubelet[2674]: E1213 02:00:29.122819 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.122904 kubelet[2674]: W1213 02:00:29.122830 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.122904 kubelet[2674]: E1213 02:00:29.122850 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.123154 kubelet[2674]: E1213 02:00:29.123143 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.123284 kubelet[2674]: W1213 02:00:29.123205 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.123284 kubelet[2674]: E1213 02:00:29.123224 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.123457 kubelet[2674]: E1213 02:00:29.123440 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.123457 kubelet[2674]: W1213 02:00:29.123455 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.123510 kubelet[2674]: E1213 02:00:29.123467 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.123968 kubelet[2674]: E1213 02:00:29.123812 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.123968 kubelet[2674]: W1213 02:00:29.123822 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.123968 kubelet[2674]: E1213 02:00:29.123835 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.125343 kubelet[2674]: E1213 02:00:29.123998 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.125343 kubelet[2674]: W1213 02:00:29.124006 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.125343 kubelet[2674]: E1213 02:00:29.124042 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.125343 kubelet[2674]: E1213 02:00:29.124456 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.125343 kubelet[2674]: W1213 02:00:29.124467 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.125343 kubelet[2674]: E1213 02:00:29.124477 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.125343 kubelet[2674]: E1213 02:00:29.124647 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.125343 kubelet[2674]: W1213 02:00:29.124654 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.125343 kubelet[2674]: E1213 02:00:29.124664 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.126290 kubelet[2674]: E1213 02:00:29.126072 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.126290 kubelet[2674]: W1213 02:00:29.126111 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.126290 kubelet[2674]: E1213 02:00:29.126130 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.126614 kubelet[2674]: E1213 02:00:29.126401 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.126614 kubelet[2674]: W1213 02:00:29.126412 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.126614 kubelet[2674]: E1213 02:00:29.126525 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.127288 kubelet[2674]: E1213 02:00:29.127266 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.127455 kubelet[2674]: W1213 02:00:29.127379 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.127455 kubelet[2674]: E1213 02:00:29.127403 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 02:00:29.127978 kubelet[2674]: E1213 02:00:29.127963 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:00:29.128121 kubelet[2674]: W1213 02:00:29.128072 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:00:29.128121 kubelet[2674]: E1213 02:00:29.128091 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
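The plugins.go:691 lines recur in bursts because the kubelet re-probes its FlexVolume plugin directory whenever something changes there; every pass re-runs `init` against the same broken driver. A rough sketch of that scan under the `vendor~driver` naming convention (illustrative only, not the kubelet's actual prober code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// probeFlexDir walks vendor~driver subdirectories the way a FlexVolume
// prober would, and resolves the executable each one is expected to contain.
func probeFlexDir(root string) {
	entries, err := os.ReadDir(root)
	if err != nil {
		fmt.Println("probe:", err)
		return
	}
	for _, e := range entries {
		if !e.IsDir() || !strings.Contains(e.Name(), "~") {
			continue
		}
		// "nodeagent~uds" -> driver name "uds"; the binary must live at
		// <root>/nodeagent~uds/uds and answer `uds init` with a JSON status.
		driver := e.Name()[strings.Index(e.Name(), "~")+1:]
		exe := filepath.Join(root, e.Name(), driver)
		if _, err := os.Stat(exe); err != nil {
			// This is the condition the log above keeps hitting.
			fmt.Printf("driver %s: missing executable %s\n", e.Name(), exe)
		}
	}
}

func main() {
	probeFlexDir("/opt/libexec/kubernetes/kubelet-plugins/volume/exec")
}
```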
Dec 13 02:00:30.875421 kubelet[2674]: E1213 02:00:30.874795 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb9nr" podUID="c72753a3-f6db-44d3-aefe-df577750df39"
Dec 13 02:00:30.963057 containerd[1476]: time="2024-12-13T02:00:30.962163201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:00:30.963057 containerd[1476]: time="2024-12-13T02:00:30.963028610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Dec 13 02:00:30.965043 containerd[1476]: time="2024-12-13T02:00:30.964042981Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:00:30.966187 containerd[1476]: time="2024-12-13T02:00:30.966143884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:00:30.966803 containerd[1476]: time="2024-12-13T02:00:30.966776810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 3.375873683s"
Dec 13 02:00:30.966964 containerd[1476]: time="2024-12-13T02:00:30.966879571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Dec 13 02:00:30.969269 containerd[1476]: time="2024-12-13T02:00:30.969242517Z" level=info msg="CreateContainer within sandbox \"c66cf9b8eb809277c48cd2ce9650af4e4a2f5aefcbc8ab12706f91b1ad92fb3a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 02:00:30.988318 containerd[1476]: time="2024-12-13T02:00:30.988270199Z" level=info msg="CreateContainer within sandbox \"c66cf9b8eb809277c48cd2ce9650af4e4a2f5aefcbc8ab12706f91b1ad92fb3a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"20fbee31115ab1759682bcd491bacf5228b8d393e621f0532aa00e2ce5d8e7af\""
Dec 13 02:00:30.989764 containerd[1476]: time="2024-12-13T02:00:30.989693414Z" level=info msg="StartContainer for \"20fbee31115ab1759682bcd491bacf5228b8d393e621f0532aa00e2ce5d8e7af\""
Dec 13 02:00:31.026271 systemd[1]: Started cri-containerd-20fbee31115ab1759682bcd491bacf5228b8d393e621f0532aa00e2ce5d8e7af.scope - libcontainer container 20fbee31115ab1759682bcd491bacf5228b8d393e621f0532aa00e2ce5d8e7af.
Dec 13 02:00:31.056467 containerd[1476]: time="2024-12-13T02:00:31.056319669Z" level=info msg="StartContainer for \"20fbee31115ab1759682bcd491bacf5228b8d393e621f0532aa00e2ce5d8e7af\" returns successfully"
Dec 13 02:00:31.090860 systemd[1]: cri-containerd-20fbee31115ab1759682bcd491bacf5228b8d393e621f0532aa00e2ce5d8e7af.scope: Deactivated successfully.
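The ImageCreate / Pulled / CreateContainer / StartContainer sequence above is containerd's CRI plugin at work. Roughly the same lifecycle can be driven directly with containerd's Go client; this sketch uses the image from the log but a placeholder container id, and omits the CRI sandbox machinery:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// The kubelet's pods live in containerd's "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage / ImageCreate / "Pulled image ..." phase.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer phase ("demo-flexvol" is a placeholder id).
	container, err := client.NewContainer(ctx, "demo-flexvol",
		containerd.WithNewSnapshot("demo-flexvol-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer phase: creating and starting the task is what spawns the
	// shim and the cri-containerd-<id>.scope unit seen in the log.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```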
Dec 13 02:00:31.114630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20fbee31115ab1759682bcd491bacf5228b8d393e621f0532aa00e2ce5d8e7af-rootfs.mount: Deactivated successfully.
Dec 13 02:00:31.195801 containerd[1476]: time="2024-12-13T02:00:31.195382056Z" level=info msg="shim disconnected" id=20fbee31115ab1759682bcd491bacf5228b8d393e621f0532aa00e2ce5d8e7af namespace=k8s.io
Dec 13 02:00:31.195801 containerd[1476]: time="2024-12-13T02:00:31.195539418Z" level=warning msg="cleaning up after shim disconnected" id=20fbee31115ab1759682bcd491bacf5228b8d393e621f0532aa00e2ce5d8e7af namespace=k8s.io
Dec 13 02:00:31.195801 containerd[1476]: time="2024-12-13T02:00:31.195551978Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:00:31.992136 containerd[1476]: time="2024-12-13T02:00:31.992082916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 02:00:32.874665 kubelet[2674]: E1213 02:00:32.874268 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb9nr" podUID="c72753a3-f6db-44d3-aefe-df577750df39"
Dec 13 02:00:34.874342 kubelet[2674]: E1213 02:00:34.874213 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb9nr" podUID="c72753a3-f6db-44d3-aefe-df577750df39"
Dec 13 02:00:36.874325 kubelet[2674]: E1213 02:00:36.874268 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb9nr" podUID="c72753a3-f6db-44d3-aefe-df577750df39"
Dec 13 02:00:37.896737 containerd[1476]: time="2024-12-13T02:00:37.896688545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:00:37.898721 containerd[1476]: time="2024-12-13T02:00:37.898680093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Dec 13 02:00:37.900668 containerd[1476]: time="2024-12-13T02:00:37.899678146Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:00:37.902541 containerd[1476]: time="2024-12-13T02:00:37.902508705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:00:37.903176 containerd[1476]: time="2024-12-13T02:00:37.903131834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 5.910137548s"
Dec 13 02:00:37.903239 containerd[1476]: time="2024-12-13T02:00:37.903180514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Dec 13 02:00:37.907424 containerd[1476]: time="2024-12-13T02:00:37.907391732Z" level=info msg="CreateContainer within sandbox \"c66cf9b8eb809277c48cd2ce9650af4e4a2f5aefcbc8ab12706f91b1ad92fb3a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 02:00:37.920828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3260738331.mount: Deactivated successfully.
Dec 13 02:00:37.926772 containerd[1476]: time="2024-12-13T02:00:37.924625209Z" level=info msg="CreateContainer within sandbox \"c66cf9b8eb809277c48cd2ce9650af4e4a2f5aefcbc8ab12706f91b1ad92fb3a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"892117b4752abc03e75c6a24a9786310a6f717fe72caac92e0bca770cf1832f6\""
Dec 13 02:00:37.926772 containerd[1476]: time="2024-12-13T02:00:37.925345139Z" level=info msg="StartContainer for \"892117b4752abc03e75c6a24a9786310a6f717fe72caac92e0bca770cf1832f6\""
Dec 13 02:00:37.959341 systemd[1]: Started cri-containerd-892117b4752abc03e75c6a24a9786310a6f717fe72caac92e0bca770cf1832f6.scope - libcontainer container 892117b4752abc03e75c6a24a9786310a6f717fe72caac92e0bca770cf1832f6.
Dec 13 02:00:37.994670 containerd[1476]: time="2024-12-13T02:00:37.994600691Z" level=info msg="StartContainer for \"892117b4752abc03e75c6a24a9786310a6f717fe72caac92e0bca770cf1832f6\" returns successfully"
Dec 13 02:00:38.529764 systemd[1]: cri-containerd-892117b4752abc03e75c6a24a9786310a6f717fe72caac92e0bca770cf1832f6.scope: Deactivated successfully.
Dec 13 02:00:38.556774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-892117b4752abc03e75c6a24a9786310a6f717fe72caac92e0bca770cf1832f6-rootfs.mount: Deactivated successfully.
Dec 13 02:00:38.626683 kubelet[2674]: I1213 02:00:38.626530 2674 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 02:00:38.659738 containerd[1476]: time="2024-12-13T02:00:38.659455969Z" level=info msg="shim disconnected" id=892117b4752abc03e75c6a24a9786310a6f717fe72caac92e0bca770cf1832f6 namespace=k8s.io
Dec 13 02:00:38.659738 containerd[1476]: time="2024-12-13T02:00:38.659532290Z" level=warning msg="cleaning up after shim disconnected" id=892117b4752abc03e75c6a24a9786310a6f717fe72caac92e0bca770cf1832f6 namespace=k8s.io
Dec 13 02:00:38.659738 containerd[1476]: time="2024-12-13T02:00:38.659555651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:00:38.683424 systemd[1]: Created slice kubepods-burstable-pode913ba99_51c8_4660_80f1_d499d5133b83.slice - libcontainer container kubepods-burstable-pode913ba99_51c8_4660_80f1_d499d5133b83.slice.
Dec 13 02:00:38.695451 kubelet[2674]: W1213 02:00:38.695251 2674 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-1-4-277531bf34" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-2-1-4-277531bf34' and this object
Dec 13 02:00:38.695451 kubelet[2674]: E1213 02:00:38.695387 2674 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-2-1-4-277531bf34\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-2-1-4-277531bf34' and this object" logger="UnhandledError"
Dec 13 02:00:38.696493 kubelet[2674]: W1213 02:00:38.695866 2674 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081-2-1-4-277531bf34" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-2-1-4-277531bf34' and this object
Dec 13 02:00:38.696493 kubelet[2674]: E1213 02:00:38.695897 2674 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081-2-1-4-277531bf34\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-2-1-4-277531bf34' and this object" logger="UnhandledError"
Dec 13 02:00:38.704876 systemd[1]: Created slice kubepods-burstable-pod3d48b786_1c23_46ba_b027_707a91594565.slice - libcontainer container kubepods-burstable-pod3d48b786_1c23_46ba_b027_707a91594565.slice.
Dec 13 02:00:38.713904 systemd[1]: Created slice kubepods-besteffort-poda78fb6a3_ca79_4137_a8f3_a349aa537780.slice - libcontainer container kubepods-besteffort-poda78fb6a3_ca79_4137_a8f3_a349aa537780.slice.
Dec 13 02:00:38.724295 systemd[1]: Created slice kubepods-besteffort-podb0f89edf_f3cb_4a33_b829_5564e6aeb598.slice - libcontainer container kubepods-besteffort-podb0f89edf_f3cb_4a33_b829_5564e6aeb598.slice.
Dec 13 02:00:38.732327 systemd[1]: Created slice kubepods-besteffort-pod7522399d_902f_4e35_8d1b_67a7dba93ba9.slice - libcontainer container kubepods-besteffort-pod7522399d_902f_4e35_8d1b_67a7dba93ba9.slice.
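The reflector errors above are the node authorizer doing its job: a kubelet may read a secret or configmap only once a pod on that node actually references it, and these pods were still being set up ("no relationship found between node ... and this object"). A hedged client-go sketch of the kind of List the reflector attempts, with namespace and object name taken from the log (this would only run with the node's credentials, and the exact error text may differ):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Reflectors list a single object by field selector; until a pod on this
	// node references the secret, the node authorizer rejects the request.
	_, err = cs.CoreV1().Secrets("calico-apiserver").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=calico-apiserver-certs",
	})
	fmt.Println(err)
}
```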
Dec 13 02:00:38.790351 kubelet[2674]: I1213 02:00:38.790035 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99gbk\" (UniqueName: \"kubernetes.io/projected/e913ba99-51c8-4660-80f1-d499d5133b83-kube-api-access-99gbk\") pod \"coredns-6f6b679f8f-xlsvx\" (UID: \"e913ba99-51c8-4660-80f1-d499d5133b83\") " pod="kube-system/coredns-6f6b679f8f-xlsvx"
Dec 13 02:00:38.790351 kubelet[2674]: I1213 02:00:38.790112 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkth6\" (UniqueName: \"kubernetes.io/projected/7522399d-902f-4e35-8d1b-67a7dba93ba9-kube-api-access-fkth6\") pod \"calico-apiserver-f8df5958f-fzbtl\" (UID: \"7522399d-902f-4e35-8d1b-67a7dba93ba9\") " pod="calico-apiserver/calico-apiserver-f8df5958f-fzbtl"
Dec 13 02:00:38.790351 kubelet[2674]: I1213 02:00:38.790148 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d48b786-1c23-46ba-b027-707a91594565-config-volume\") pod \"coredns-6f6b679f8f-f5bh8\" (UID: \"3d48b786-1c23-46ba-b027-707a91594565\") " pod="kube-system/coredns-6f6b679f8f-f5bh8"
Dec 13 02:00:38.790351 kubelet[2674]: I1213 02:00:38.790181 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f89edf-f3cb-4a33-b829-5564e6aeb598-tigera-ca-bundle\") pod \"calico-kube-controllers-6685777db6-ng5ph\" (UID: \"b0f89edf-f3cb-4a33-b829-5564e6aeb598\") " pod="calico-system/calico-kube-controllers-6685777db6-ng5ph"
Dec 13 02:00:38.790351 kubelet[2674]: I1213 02:00:38.790218 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znc5z\" (UniqueName: \"kubernetes.io/projected/3d48b786-1c23-46ba-b027-707a91594565-kube-api-access-znc5z\") pod \"coredns-6f6b679f8f-f5bh8\" (UID: \"3d48b786-1c23-46ba-b027-707a91594565\") " pod="kube-system/coredns-6f6b679f8f-f5bh8"
Dec 13 02:00:38.790720 kubelet[2674]: I1213 02:00:38.790251 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zgcj\" (UniqueName: \"kubernetes.io/projected/a78fb6a3-ca79-4137-a8f3-a349aa537780-kube-api-access-6zgcj\") pod \"calico-apiserver-f8df5958f-q5lr8\" (UID: \"a78fb6a3-ca79-4137-a8f3-a349aa537780\") " pod="calico-apiserver/calico-apiserver-f8df5958f-q5lr8"
Dec 13 02:00:38.790720 kubelet[2674]: I1213 02:00:38.790293 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg6xb\" (UniqueName: \"kubernetes.io/projected/b0f89edf-f3cb-4a33-b829-5564e6aeb598-kube-api-access-sg6xb\") pod \"calico-kube-controllers-6685777db6-ng5ph\" (UID: \"b0f89edf-f3cb-4a33-b829-5564e6aeb598\") " pod="calico-system/calico-kube-controllers-6685777db6-ng5ph"
Dec 13 02:00:38.790720 kubelet[2674]: I1213 02:00:38.790330 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7522399d-902f-4e35-8d1b-67a7dba93ba9-calico-apiserver-certs\") pod \"calico-apiserver-f8df5958f-fzbtl\" (UID: \"7522399d-902f-4e35-8d1b-67a7dba93ba9\") " pod="calico-apiserver/calico-apiserver-f8df5958f-fzbtl"
Dec 13 02:00:38.790720 kubelet[2674]: I1213 02:00:38.790368 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e913ba99-51c8-4660-80f1-d499d5133b83-config-volume\") pod \"coredns-6f6b679f8f-xlsvx\" (UID: \"e913ba99-51c8-4660-80f1-d499d5133b83\") " pod="kube-system/coredns-6f6b679f8f-xlsvx"
Dec 13 02:00:38.790720 kubelet[2674]: I1213 02:00:38.790397 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a78fb6a3-ca79-4137-a8f3-a349aa537780-calico-apiserver-certs\") pod \"calico-apiserver-f8df5958f-q5lr8\" (UID: \"a78fb6a3-ca79-4137-a8f3-a349aa537780\") " pod="calico-apiserver/calico-apiserver-f8df5958f-q5lr8"
Dec 13 02:00:38.883364 systemd[1]: Created slice kubepods-besteffort-podc72753a3_f6db_44d3_aefe_df577750df39.slice - libcontainer container kubepods-besteffort-podc72753a3_f6db_44d3_aefe_df577750df39.slice.
Dec 13 02:00:38.886141 containerd[1476]: time="2024-12-13T02:00:38.886093374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mb9nr,Uid:c72753a3-f6db-44d3-aefe-df577750df39,Namespace:calico-system,Attempt:0,}"
Dec 13 02:00:38.998070 containerd[1476]: time="2024-12-13T02:00:38.998031716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xlsvx,Uid:e913ba99-51c8-4660-80f1-d499d5133b83,Namespace:kube-system,Attempt:0,}"
Dec 13 02:00:39.013638 containerd[1476]: time="2024-12-13T02:00:39.013593301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f5bh8,Uid:3d48b786-1c23-46ba-b027-707a91594565,Namespace:kube-system,Attempt:0,}"
Dec 13 02:00:39.018131 containerd[1476]: time="2024-12-13T02:00:39.018096807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 02:00:39.031673 containerd[1476]: time="2024-12-13T02:00:39.031622083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6685777db6-ng5ph,Uid:b0f89edf-f3cb-4a33-b829-5564e6aeb598,Namespace:calico-system,Attempt:0,}"
Dec 13 02:00:39.124898 containerd[1476]: time="2024-12-13T02:00:39.124838276Z" level=error msg="Failed to destroy network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.125543 containerd[1476]: time="2024-12-13T02:00:39.125496326Z" level=error msg="encountered an error cleaning up failed sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.125739 containerd[1476]: time="2024-12-13T02:00:39.125683849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mb9nr,Uid:c72753a3-f6db-44d3-aefe-df577750df39,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.126177 kubelet[2674]: E1213 02:00:39.125996 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.126177 kubelet[2674]: E1213 02:00:39.126131 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mb9nr"
Dec 13 02:00:39.126177 kubelet[2674]: E1213 02:00:39.126153 2674 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mb9nr"
Dec 13 02:00:39.126309 kubelet[2674]: E1213 02:00:39.126198 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mb9nr_calico-system(c72753a3-f6db-44d3-aefe-df577750df39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mb9nr_calico-system(c72753a3-f6db-44d3-aefe-df577750df39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mb9nr" podUID="c72753a3-f6db-44d3-aefe-df577750df39"
Dec 13 02:00:39.160515 containerd[1476]: time="2024-12-13T02:00:39.158088479Z" level=error msg="Failed to destroy network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.160515 containerd[1476]: time="2024-12-13T02:00:39.158447004Z" level=error msg="encountered an error cleaning up failed sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.160515 containerd[1476]: time="2024-12-13T02:00:39.158514245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6685777db6-ng5ph,Uid:b0f89edf-f3cb-4a33-b829-5564e6aeb598,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.160745 kubelet[2674]: E1213 02:00:39.158757 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.160745 kubelet[2674]: E1213 02:00:39.158818 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6685777db6-ng5ph"
Dec 13 02:00:39.160745 kubelet[2674]: E1213 02:00:39.158842 2674 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6685777db6-ng5ph"
Dec 13 02:00:39.160874 kubelet[2674]: E1213 02:00:39.158879 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6685777db6-ng5ph_calico-system(b0f89edf-f3cb-4a33-b829-5564e6aeb598)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6685777db6-ng5ph_calico-system(b0f89edf-f3cb-4a33-b829-5564e6aeb598)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6685777db6-ng5ph" podUID="b0f89edf-f3cb-4a33-b829-5564e6aeb598"
Dec 13 02:00:39.168404 containerd[1476]: time="2024-12-13T02:00:39.168159785Z" level=error msg="Failed to destroy network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.168543 containerd[1476]: time="2024-12-13T02:00:39.168481350Z" level=error msg="encountered an error cleaning up failed sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.168543 containerd[1476]: time="2024-12-13T02:00:39.168528991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xlsvx,Uid:e913ba99-51c8-4660-80f1-d499d5133b83,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.168834 kubelet[2674]: E1213 02:00:39.168728 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.168834 kubelet[2674]: E1213 02:00:39.168789 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xlsvx"
Dec 13 02:00:39.168834 kubelet[2674]: E1213 02:00:39.168808 2674 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xlsvx"
Dec 13 02:00:39.168961 kubelet[2674]: E1213 02:00:39.168847 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xlsvx_kube-system(e913ba99-51c8-4660-80f1-d499d5133b83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xlsvx_kube-system(e913ba99-51c8-4660-80f1-d499d5133b83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xlsvx" podUID="e913ba99-51c8-4660-80f1-d499d5133b83"
Dec 13 02:00:39.177299 containerd[1476]: time="2024-12-13T02:00:39.177241037Z" level=error msg="Failed to destroy network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.177654 containerd[1476]: time="2024-12-13T02:00:39.177623083Z" level=error msg="encountered an error cleaning up failed sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.177707 containerd[1476]: time="2024-12-13T02:00:39.177684963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f5bh8,Uid:3d48b786-1c23-46ba-b027-707a91594565,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.177937 kubelet[2674]: E1213 02:00:39.177890 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 02:00:39.177989 kubelet[2674]: E1213 02:00:39.177960 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f5bh8"
Dec 13 02:00:39.178043 kubelet[2674]: E1213 02:00:39.177980 2674 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f5bh8"
Dec 13 02:00:39.178086 kubelet[2674]: E1213 02:00:39.178056 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-f5bh8_kube-system(3d48b786-1c23-46ba-b027-707a91594565)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-f5bh8_kube-system(3d48b786-1c23-46ba-b027-707a91594565)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-f5bh8" podUID="3d48b786-1c23-46ba-b027-707a91594565"
Dec 13 02:00:39.242231 kubelet[2674]: I1213 02:00:39.242125 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 02:00:39.924899 containerd[1476]: time="2024-12-13T02:00:39.924296643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8df5958f-q5lr8,Uid:a78fb6a3-ca79-4137-a8f3-a349aa537780,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 02:00:39.929632 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b-shm.mount: Deactivated successfully.
Dec 13 02:00:39.930111 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc-shm.mount: Deactivated successfully.
Dec 13 02:00:39.937346 containerd[1476]: time="2024-12-13T02:00:39.937103309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8df5958f-fzbtl,Uid:7522399d-902f-4e35-8d1b-67a7dba93ba9,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:00:40.013092 containerd[1476]: time="2024-12-13T02:00:40.010460577Z" level=error msg="Failed to destroy network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.012868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863-shm.mount: Deactivated successfully. Dec 13 02:00:40.013812 containerd[1476]: time="2024-12-13T02:00:40.013772307Z" level=error msg="encountered an error cleaning up failed sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.015138 containerd[1476]: time="2024-12-13T02:00:40.015081446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8df5958f-q5lr8,Uid:a78fb6a3-ca79-4137-a8f3-a349aa537780,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.015790 kubelet[2674]: E1213 02:00:40.015282 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.015790 kubelet[2674]: E1213 02:00:40.015340 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8df5958f-q5lr8" Dec 13 02:00:40.015790 kubelet[2674]: E1213 02:00:40.015359 2674 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8df5958f-q5lr8" Dec 13 02:00:40.017802 kubelet[2674]: E1213 02:00:40.015394 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8df5958f-q5lr8_calico-apiserver(a78fb6a3-ca79-4137-a8f3-a349aa537780)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8df5958f-q5lr8_calico-apiserver(a78fb6a3-ca79-4137-a8f3-a349aa537780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8df5958f-q5lr8" podUID="a78fb6a3-ca79-4137-a8f3-a349aa537780" Dec 13 02:00:40.020252 kubelet[2674]: I1213 02:00:40.020226 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:00:40.021235 containerd[1476]: time="2024-12-13T02:00:40.021193617Z" level=info msg="StopPodSandbox for \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\"" Dec 13 02:00:40.021395 containerd[1476]: time="2024-12-13T02:00:40.021366900Z" level=info msg="Ensure that sandbox 50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863 in task-service has been cleanup successfully" Dec 13 02:00:40.023609 kubelet[2674]: I1213 02:00:40.023583 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:00:40.025673 containerd[1476]: time="2024-12-13T02:00:40.025628723Z" level=info msg="StopPodSandbox for \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\"" Dec 13 02:00:40.025848 containerd[1476]: time="2024-12-13T02:00:40.025800726Z" level=info msg="Ensure that sandbox a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e in task-service has been cleanup successfully" Dec 13 02:00:40.026785 kubelet[2674]: I1213 02:00:40.026761 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:00:40.027813 containerd[1476]: time="2024-12-13T02:00:40.027776235Z" level=info msg="StopPodSandbox for \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\"" Dec 13 02:00:40.028112 containerd[1476]: time="2024-12-13T02:00:40.028091400Z" level=info msg="Ensure that sandbox 73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc in task-service has been cleanup successfully" Dec 13 02:00:40.034657 kubelet[2674]: I1213 02:00:40.034630 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:00:40.037222 containerd[1476]: time="2024-12-13T02:00:40.037187335Z" level=info msg="StopPodSandbox for \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\"" Dec 13 02:00:40.037960 containerd[1476]: time="2024-12-13T02:00:40.037932906Z" level=info msg="Ensure that sandbox d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919 in task-service has been cleanup successfully" Dec 13 02:00:40.042081 kubelet[2674]: I1213 02:00:40.041012 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:00:40.043145 containerd[1476]: time="2024-12-13T02:00:40.042837619Z" level=info msg="StopPodSandbox for \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\"" Dec 13 02:00:40.043655 containerd[1476]: 
time="2024-12-13T02:00:40.043628911Z" level=info msg="Ensure that sandbox 1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b in task-service has been cleanup successfully" Dec 13 02:00:40.096419 containerd[1476]: time="2024-12-13T02:00:40.096369176Z" level=error msg="StopPodSandbox for \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\" failed" error="failed to destroy network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.097048 kubelet[2674]: E1213 02:00:40.096857 2674 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:00:40.097048 kubelet[2674]: E1213 02:00:40.096926 2674 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863"} Dec 13 02:00:40.097048 kubelet[2674]: E1213 02:00:40.096979 2674 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a78fb6a3-ca79-4137-a8f3-a349aa537780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:00:40.097048 kubelet[2674]: E1213 02:00:40.097001 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a78fb6a3-ca79-4137-a8f3-a349aa537780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8df5958f-q5lr8" podUID="a78fb6a3-ca79-4137-a8f3-a349aa537780" Dec 13 02:00:40.099205 containerd[1476]: time="2024-12-13T02:00:40.099154658Z" level=error msg="StopPodSandbox for \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\" failed" error="failed to destroy network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.100079 kubelet[2674]: E1213 02:00:40.099364 2674 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:00:40.100079 kubelet[2674]: E1213 02:00:40.099410 2674 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e"} Dec 13 02:00:40.100079 kubelet[2674]: E1213 02:00:40.099439 2674 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0f89edf-f3cb-4a33-b829-5564e6aeb598\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:00:40.100079 kubelet[2674]: E1213 02:00:40.099472 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0f89edf-f3cb-4a33-b829-5564e6aeb598\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6685777db6-ng5ph" podUID="b0f89edf-f3cb-4a33-b829-5564e6aeb598" Dec 13 02:00:40.112244 containerd[1476]: time="2024-12-13T02:00:40.112192252Z" level=error msg="Failed to destroy network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.113220 containerd[1476]: time="2024-12-13T02:00:40.113176186Z" level=error msg="encountered an error cleaning up failed sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.113294 containerd[1476]: time="2024-12-13T02:00:40.113247827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8df5958f-fzbtl,Uid:7522399d-902f-4e35-8d1b-67a7dba93ba9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.113899 kubelet[2674]: E1213 02:00:40.113562 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.113899 kubelet[2674]: E1213 02:00:40.113619 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8df5958f-fzbtl" Dec 13 02:00:40.113899 kubelet[2674]: E1213 02:00:40.113642 2674 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8df5958f-fzbtl" Dec 13 02:00:40.114151 kubelet[2674]: E1213 02:00:40.113680 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8df5958f-fzbtl_calico-apiserver(7522399d-902f-4e35-8d1b-67a7dba93ba9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8df5958f-fzbtl_calico-apiserver(7522399d-902f-4e35-8d1b-67a7dba93ba9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8df5958f-fzbtl" podUID="7522399d-902f-4e35-8d1b-67a7dba93ba9" Dec 13 02:00:40.126176 containerd[1476]: time="2024-12-13T02:00:40.125896936Z" level=error msg="StopPodSandbox for \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\" failed" error="failed to destroy network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.127639 kubelet[2674]: E1213 02:00:40.127480 2674 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:00:40.127639 kubelet[2674]: E1213 02:00:40.127542 2674 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b"} Dec 13 02:00:40.127639 kubelet[2674]: E1213 02:00:40.127576 2674 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e913ba99-51c8-4660-80f1-d499d5133b83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:00:40.127639 kubelet[2674]: E1213 02:00:40.127604 2674 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e913ba99-51c8-4660-80f1-d499d5133b83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xlsvx" podUID="e913ba99-51c8-4660-80f1-d499d5133b83" Dec 13 02:00:40.135205 containerd[1476]: time="2024-12-13T02:00:40.134735027Z" level=error msg="StopPodSandbox for \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\" failed" error="failed to destroy network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.135359 kubelet[2674]: E1213 02:00:40.134977 2674 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:00:40.135359 kubelet[2674]: E1213 02:00:40.135060 2674 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc"} Dec 13 02:00:40.135359 kubelet[2674]: E1213 02:00:40.135093 2674 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c72753a3-f6db-44d3-aefe-df577750df39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:00:40.135359 kubelet[2674]: E1213 02:00:40.135127 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c72753a3-f6db-44d3-aefe-df577750df39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mb9nr" podUID="c72753a3-f6db-44d3-aefe-df577750df39" Dec 13 02:00:40.138859 containerd[1476]: time="2024-12-13T02:00:40.138190319Z" level=error msg="StopPodSandbox for \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\" failed" error="failed to destroy network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:40.138995 kubelet[2674]: E1213 
02:00:40.138684 2674 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:00:40.138995 kubelet[2674]: E1213 02:00:40.138729 2674 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919"} Dec 13 02:00:40.138995 kubelet[2674]: E1213 02:00:40.138761 2674 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d48b786-1c23-46ba-b027-707a91594565\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:00:40.138995 kubelet[2674]: E1213 02:00:40.138799 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d48b786-1c23-46ba-b027-707a91594565\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-f5bh8" podUID="3d48b786-1c23-46ba-b027-707a91594565" Dec 13 02:00:40.920809 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52-shm.mount: Deactivated successfully. 
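Two things stand out in the storm above. First, teardown fails for the same reason as setup: CNI DEL needs the nodename file just like CNI ADD, so containerd can only mark the half-created sandboxes SANDBOX_UNKNOWN and leave systemd to deactivate their shm mounts. Second, every failure is reported four times per pod (log.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go) with progressively heavier backslash escaping, because each layer re-quotes the previous layer's message as a string. A stdlib-only sketch of that quoting cascade; the layer names mirror the log, but the exact wrapping is internal to containerd and the kubelet:

package main

import (
	"errors"
	"fmt"
	"io/fs"
)

func main() {
	// CNI plugin level: the underlying filesystem failure.
	stat := fmt.Errorf("stat /var/lib/calico/nodename: %w", fs.ErrNotExist)
	// containerd level: folded into the gRPC status the kubelet receives.
	rpc := fmt.Errorf("rpc error: code = Unknown desc = failed to setup network for sandbox %q: plugin type=%q failed (add): %v",
		"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b", "calico", stat)
	// kubelet pod_workers level: the whole rpc error re-quoted once more,
	// which is where the \\\" runs in the journal come from.
	sync := fmt.Errorf("failed to \"CreatePodSandbox\" with CreatePodSandboxError: %q", rpc.Error())
	fmt.Println(sync)
	// Only the %w link is inspectable; the %v/%q layers flatten to text.
	fmt.Println(errors.Is(stat, fs.ErrNotExist), errors.Is(rpc, fs.ErrNotExist)) // true false
}

Once a layer stringifies the error, downstream components can match the failure only by substring, which is why the full advice text is carried along verbatim at every level.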
Dec 13 02:00:41.044972 kubelet[2674]: I1213 02:00:41.044933 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:00:41.046493 containerd[1476]: time="2024-12-13T02:00:41.046449614Z" level=info msg="StopPodSandbox for \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\"" Dec 13 02:00:41.047027 containerd[1476]: time="2024-12-13T02:00:41.046717218Z" level=info msg="Ensure that sandbox 6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52 in task-service has been cleanup successfully" Dec 13 02:00:41.082410 containerd[1476]: time="2024-12-13T02:00:41.082290081Z" level=error msg="StopPodSandbox for \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\" failed" error="failed to destroy network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:00:41.082630 kubelet[2674]: E1213 02:00:41.082558 2674 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:00:41.082680 kubelet[2674]: E1213 02:00:41.082616 2674 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52"} Dec 13 02:00:41.082680 kubelet[2674]: E1213 02:00:41.082669 2674 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7522399d-902f-4e35-8d1b-67a7dba93ba9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:00:41.082751 kubelet[2674]: E1213 02:00:41.082693 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7522399d-902f-4e35-8d1b-67a7dba93ba9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8df5958f-fzbtl" podUID="7522399d-902f-4e35-8d1b-67a7dba93ba9" Dec 13 02:00:45.366572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542458363.mount: Deactivated successfully. 
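The units reporting "Deactivated successfully" here are systemd mount units named after their mount points: one /dev/shm tmpfs per failed sandbox, plus a temporary mount under /var/lib/containerd/tmpmounts for the image pull. The \x2d runs in their names come from systemd's unit-name escaping, which maps "/" to "-" and hex-escapes any other character outside [A-Za-z0-9:_.], including literal "-". A small ASCII-only sketch of that rule; systemd-escape(1) is the canonical implementation and the function name is ours:

package main

import (
	"fmt"
	"strings"
)

// systemdEscapePath applies systemd's unit-name escaping to a mount point:
// strip the leading "/", map "/" to "-", keep [A-Za-z0-9:_] and interior
// ".", and hex-escape everything else (including "-" itself) as \xXX.
func systemdEscapePath(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_', c == ':':
			b.WriteByte(c)
		case c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the tmpmount unit name seen in the entry above:
	// var-lib-containerd-tmpmounts-containerd\x2dmount1542458363.mount
	fmt.Println(systemdEscapePath("/var/lib/containerd/tmpmounts/containerd-mount1542458363") + ".mount")
}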
Dec 13 02:00:45.400788 containerd[1476]: time="2024-12-13T02:00:45.399784912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:45.401369 containerd[1476]: time="2024-12-13T02:00:45.401178655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 02:00:45.402169 containerd[1476]: time="2024-12-13T02:00:45.402124790Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:45.405318 containerd[1476]: time="2024-12-13T02:00:45.405250842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:45.405750 containerd[1476]: time="2024-12-13T02:00:45.405695170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.387367919s" Dec 13 02:00:45.405750 containerd[1476]: time="2024-12-13T02:00:45.405740570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 02:00:45.426955 containerd[1476]: time="2024-12-13T02:00:45.426913481Z" level=info msg="CreateContainer within sandbox \"c66cf9b8eb809277c48cd2ce9650af4e4a2f5aefcbc8ab12706f91b1ad92fb3a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:00:45.445797 containerd[1476]: time="2024-12-13T02:00:45.445736072Z" level=info msg="CreateContainer within sandbox \"c66cf9b8eb809277c48cd2ce9650af4e4a2f5aefcbc8ab12706f91b1ad92fb3a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"744b29f874aefb501776e2d3d11388fb6e672902e879f5f1f9995d842fe1a6c5\"" Dec 13 02:00:45.448197 containerd[1476]: time="2024-12-13T02:00:45.448154672Z" level=info msg="StartContainer for \"744b29f874aefb501776e2d3d11388fb6e672902e879f5f1f9995d842fe1a6c5\"" Dec 13 02:00:45.490255 systemd[1]: Started cri-containerd-744b29f874aefb501776e2d3d11388fb6e672902e879f5f1f9995d842fe1a6c5.scope - libcontainer container 744b29f874aefb501776e2d3d11388fb6e672902e879f5f1f9995d842fe1a6c5. Dec 13 02:00:45.535180 containerd[1476]: time="2024-12-13T02:00:45.534405100Z" level=info msg="StartContainer for \"744b29f874aefb501776e2d3d11388fb6e672902e879f5f1f9995d842fe1a6c5\" returns successfully" Dec 13 02:00:45.696292 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 02:00:45.696417 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
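This is the turning point of the log: the calico/node image lands and the calico-node container starts at 02:00:45.53. The kernel loading the WireGuard module right afterwards is consistent with calico-node initializing (Calico can use WireGuard for node-to-node encryption), though that attribution is an inference, not something the log states. As a quick sanity check of the pull statistics: containerd reports bytes read=137671762 for the fetch against a recorded image size of 137671624, and either number over the logged 6.387367919s works out to roughly 20.6 MiB/s:

package main

import "fmt"

// Back-of-the-envelope rate for the calico/node pull logged above.
func main() {
	const bytesRead = 137671762 // "bytes read=137671762"
	const seconds = 6.387367919 // "in 6.387367919s"
	fmt.Printf("%.1f MiB/s\n", float64(bytesRead)/seconds/(1<<20)) // prints 20.6 MiB/s
}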
Dec 13 02:00:46.093644 kubelet[2674]: I1213 02:00:46.093180 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kbxcz" podStartSLOduration=1.967498447 podStartE2EDuration="22.093146859s" podCreationTimestamp="2024-12-13 02:00:24 +0000 UTC" firstStartedPulling="2024-12-13 02:00:25.28200015 +0000 UTC m=+17.549566340" lastFinishedPulling="2024-12-13 02:00:45.407648482 +0000 UTC m=+37.675214752" observedRunningTime="2024-12-13 02:00:46.090423173 +0000 UTC m=+38.357989403" watchObservedRunningTime="2024-12-13 02:00:46.093146859 +0000 UTC m=+38.360713049" Dec 13 02:00:47.345084 kernel: bpftool[3982]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 02:00:47.582466 systemd-networkd[1368]: vxlan.calico: Link UP Dec 13 02:00:47.582474 systemd-networkd[1368]: vxlan.calico: Gained carrier Dec 13 02:00:49.169459 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL Dec 13 02:00:50.875151 containerd[1476]: time="2024-12-13T02:00:50.874653617Z" level=info msg="StopPodSandbox for \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\"" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:50.960 [INFO][4085] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:50.961 [INFO][4085] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" iface="eth0" netns="/var/run/netns/cni-36032c2a-bbab-62cf-4ebd-0684478fa6d5" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:50.962 [INFO][4085] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" iface="eth0" netns="/var/run/netns/cni-36032c2a-bbab-62cf-4ebd-0684478fa6d5" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:50.963 [INFO][4085] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" iface="eth0" netns="/var/run/netns/cni-36032c2a-bbab-62cf-4ebd-0684478fa6d5" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:50.963 [INFO][4085] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:50.963 [INFO][4085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:51.027 [INFO][4091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" HandleID="k8s-pod-network.50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:51.027 [INFO][4091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:51.027 [INFO][4091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:51.036 [WARNING][4091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" HandleID="k8s-pod-network.50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:51.037 [INFO][4091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" HandleID="k8s-pod-network.50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:51.038 [INFO][4091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:51.043941 containerd[1476]: 2024-12-13 02:00:51.040 [INFO][4085] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:00:51.047070 containerd[1476]: time="2024-12-13T02:00:51.044426881Z" level=info msg="TearDown network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\" successfully" Dec 13 02:00:51.047070 containerd[1476]: time="2024-12-13T02:00:51.044479122Z" level=info msg="StopPodSandbox for \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\" returns successfully" Dec 13 02:00:51.044794 systemd[1]: run-netns-cni\x2d36032c2a\x2dbbab\x2d62cf\x2d4ebd\x2d0684478fa6d5.mount: Deactivated successfully. Dec 13 02:00:51.048798 containerd[1476]: time="2024-12-13T02:00:51.048408833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8df5958f-q5lr8,Uid:a78fb6a3-ca79-4137-a8f3-a349aa537780,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:00:51.214742 systemd-networkd[1368]: calif9757d3b133: Link UP Dec 13 02:00:51.214915 systemd-networkd[1368]: calif9757d3b133: Gained carrier Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.115 [INFO][4099] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0 calico-apiserver-f8df5958f- calico-apiserver a78fb6a3-ca79-4137-a8f3-a349aa537780 742 0 2024-12-13 02:00:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f8df5958f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-4-277531bf34 calico-apiserver-f8df5958f-q5lr8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif9757d3b133 [] []}} ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-q5lr8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.115 [INFO][4099] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-q5lr8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.158 [INFO][4109] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" HandleID="k8s-pod-network.19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.179 [INFO][4109] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" HandleID="k8s-pod-network.19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000223820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-4-277531bf34", "pod":"calico-apiserver-f8df5958f-q5lr8", "timestamp":"2024-12-13 02:00:51.158370039 +0000 UTC"}, Hostname:"ci-4081-2-1-4-277531bf34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.179 [INFO][4109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.179 [INFO][4109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.179 [INFO][4109] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-277531bf34' Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.182 [INFO][4109] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.186 [INFO][4109] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.191 [INFO][4109] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.193 [INFO][4109] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.196 [INFO][4109] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.196 [INFO][4109] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.198 [INFO][4109] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6 Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.202 [INFO][4109] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.209 [INFO][4109] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.65/26] block=192.168.13.64/26 
handle="k8s-pod-network.19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.209 [INFO][4109] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.65/26] handle="k8s-pod-network.19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.209 [INFO][4109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:51.236230 containerd[1476]: 2024-12-13 02:00:51.209 [INFO][4109] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.65/26] IPv6=[] ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" HandleID="k8s-pod-network.19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.236915 containerd[1476]: 2024-12-13 02:00:51.211 [INFO][4099] cni-plugin/k8s.go 386: Populated endpoint ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-q5lr8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0", GenerateName:"calico-apiserver-f8df5958f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a78fb6a3-ca79-4137-a8f3-a349aa537780", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8df5958f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"", Pod:"calico-apiserver-f8df5958f-q5lr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9757d3b133", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:51.236915 containerd[1476]: 2024-12-13 02:00:51.211 [INFO][4099] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.65/32] ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-q5lr8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.236915 containerd[1476]: 2024-12-13 02:00:51.211 [INFO][4099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9757d3b133 ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-q5lr8" 
WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.236915 containerd[1476]: 2024-12-13 02:00:51.214 [INFO][4099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-q5lr8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.236915 containerd[1476]: 2024-12-13 02:00:51.215 [INFO][4099] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-q5lr8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0", GenerateName:"calico-apiserver-f8df5958f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a78fb6a3-ca79-4137-a8f3-a349aa537780", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8df5958f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6", Pod:"calico-apiserver-f8df5958f-q5lr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9757d3b133", MAC:"2a:f2:ec:49:67:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:51.236915 containerd[1476]: 2024-12-13 02:00:51.230 [INFO][4099] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-q5lr8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:00:51.266851 containerd[1476]: time="2024-12-13T02:00:51.266097204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:51.266851 containerd[1476]: time="2024-12-13T02:00:51.266157205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:51.266851 containerd[1476]: time="2024-12-13T02:00:51.266172165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:51.266851 containerd[1476]: time="2024-12-13T02:00:51.266247487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:51.292283 systemd[1]: Started cri-containerd-19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6.scope - libcontainer container 19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6. Dec 13 02:00:51.341452 containerd[1476]: time="2024-12-13T02:00:51.341411178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8df5958f-q5lr8,Uid:a78fb6a3-ca79-4137-a8f3-a349aa537780,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6\"" Dec 13 02:00:51.343390 containerd[1476]: time="2024-12-13T02:00:51.343330293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:00:51.878248 containerd[1476]: time="2024-12-13T02:00:51.878174568Z" level=info msg="StopPodSandbox for \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\"" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.939 [INFO][4184] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.940 [INFO][4184] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" iface="eth0" netns="/var/run/netns/cni-68e4b66f-536f-c527-e92b-349e5e506bc8" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.940 [INFO][4184] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" iface="eth0" netns="/var/run/netns/cni-68e4b66f-536f-c527-e92b-349e5e506bc8" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.942 [INFO][4184] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" iface="eth0" netns="/var/run/netns/cni-68e4b66f-536f-c527-e92b-349e5e506bc8" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.942 [INFO][4184] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.942 [INFO][4184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.968 [INFO][4190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" HandleID="k8s-pod-network.73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.969 [INFO][4190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.969 [INFO][4190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.984 [WARNING][4190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" HandleID="k8s-pod-network.73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.984 [INFO][4190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" HandleID="k8s-pod-network.73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.986 [INFO][4190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:51.991175 containerd[1476]: 2024-12-13 02:00:51.988 [INFO][4184] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:00:51.992070 containerd[1476]: time="2024-12-13T02:00:51.991879922Z" level=info msg="TearDown network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\" successfully" Dec 13 02:00:51.992070 containerd[1476]: time="2024-12-13T02:00:51.991912843Z" level=info msg="StopPodSandbox for \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\" returns successfully" Dec 13 02:00:51.992625 containerd[1476]: time="2024-12-13T02:00:51.992596495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mb9nr,Uid:c72753a3-f6db-44d3-aefe-df577750df39,Namespace:calico-system,Attempt:1,}" Dec 13 02:00:52.055216 systemd[1]: run-netns-cni\x2d68e4b66f\x2d536f\x2dc527\x2de92b\x2d349e5e506bc8.mount: Deactivated successfully. Dec 13 02:00:52.145317 systemd-networkd[1368]: cali582755c968a: Link UP Dec 13 02:00:52.145477 systemd-networkd[1368]: cali582755c968a: Gained carrier Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.049 [INFO][4197] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0 csi-node-driver- calico-system c72753a3-f6db-44d3-aefe-df577750df39 750 0 2024-12-13 02:00:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-4-277531bf34 csi-node-driver-mb9nr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali582755c968a [] []}} ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Namespace="calico-system" Pod="csi-node-driver-mb9nr" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.049 [INFO][4197] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Namespace="calico-system" Pod="csi-node-driver-mb9nr" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.088 [INFO][4207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" 
HandleID="k8s-pod-network.527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.103 [INFO][4207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" HandleID="k8s-pod-network.527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002eb1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-4-277531bf34", "pod":"csi-node-driver-mb9nr", "timestamp":"2024-12-13 02:00:52.088069258 +0000 UTC"}, Hostname:"ci-4081-2-1-4-277531bf34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.103 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.103 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.103 [INFO][4207] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-277531bf34' Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.106 [INFO][4207] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.111 [INFO][4207] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.117 [INFO][4207] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.120 [INFO][4207] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.124 [INFO][4207] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.124 [INFO][4207] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.126 [INFO][4207] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9 Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.131 [INFO][4207] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.139 [INFO][4207] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.66/26] block=192.168.13.64/26 handle="k8s-pod-network.527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.139 
[INFO][4207] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.66/26] handle="k8s-pod-network.527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.139 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:52.162651 containerd[1476]: 2024-12-13 02:00:52.139 [INFO][4207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.66/26] IPv6=[] ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" HandleID="k8s-pod-network.527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:52.165034 containerd[1476]: 2024-12-13 02:00:52.141 [INFO][4197] cni-plugin/k8s.go 386: Populated endpoint ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Namespace="calico-system" Pod="csi-node-driver-mb9nr" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c72753a3-f6db-44d3-aefe-df577750df39", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"", Pod:"csi-node-driver-mb9nr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali582755c968a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:52.165034 containerd[1476]: 2024-12-13 02:00:52.141 [INFO][4197] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.66/32] ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Namespace="calico-system" Pod="csi-node-driver-mb9nr" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:52.165034 containerd[1476]: 2024-12-13 02:00:52.141 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali582755c968a ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Namespace="calico-system" Pod="csi-node-driver-mb9nr" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:52.165034 containerd[1476]: 2024-12-13 02:00:52.145 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Namespace="calico-system" 
Pod="csi-node-driver-mb9nr" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:52.165034 containerd[1476]: 2024-12-13 02:00:52.146 [INFO][4197] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Namespace="calico-system" Pod="csi-node-driver-mb9nr" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c72753a3-f6db-44d3-aefe-df577750df39", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9", Pod:"csi-node-driver-mb9nr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali582755c968a", MAC:"e6:d1:f0:45:ad:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:52.165034 containerd[1476]: 2024-12-13 02:00:52.159 [INFO][4197] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9" Namespace="calico-system" Pod="csi-node-driver-mb9nr" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:00:52.191757 containerd[1476]: time="2024-12-13T02:00:52.191651254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:52.191757 containerd[1476]: time="2024-12-13T02:00:52.191706415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:52.191757 containerd[1476]: time="2024-12-13T02:00:52.191722015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:52.192141 containerd[1476]: time="2024-12-13T02:00:52.191795816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:52.218196 systemd[1]: Started cri-containerd-527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9.scope - libcontainer container 527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9. 
Dec 13 02:00:52.244435 containerd[1476]: time="2024-12-13T02:00:52.244341148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mb9nr,Uid:c72753a3-f6db-44d3-aefe-df577750df39,Namespace:calico-system,Attempt:1,} returns sandbox id \"527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9\"" Dec 13 02:00:52.689604 systemd-networkd[1368]: calif9757d3b133: Gained IPv6LL Dec 13 02:00:52.875595 containerd[1476]: time="2024-12-13T02:00:52.875076851Z" level=info msg="StopPodSandbox for \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\"" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.932 [INFO][4284] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.932 [INFO][4284] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" iface="eth0" netns="/var/run/netns/cni-30fda51a-0542-aaa7-0702-1a60e4645e82" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.933 [INFO][4284] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" iface="eth0" netns="/var/run/netns/cni-30fda51a-0542-aaa7-0702-1a60e4645e82" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.933 [INFO][4284] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" iface="eth0" netns="/var/run/netns/cni-30fda51a-0542-aaa7-0702-1a60e4645e82" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.933 [INFO][4284] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.933 [INFO][4284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.963 [INFO][4290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" HandleID="k8s-pod-network.d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.963 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.963 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.976 [WARNING][4290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" HandleID="k8s-pod-network.d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.976 [INFO][4290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" HandleID="k8s-pod-network.d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.977 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:52.980857 containerd[1476]: 2024-12-13 02:00:52.979 [INFO][4284] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:00:52.981433 containerd[1476]: time="2024-12-13T02:00:52.981068931Z" level=info msg="TearDown network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\" successfully" Dec 13 02:00:52.981433 containerd[1476]: time="2024-12-13T02:00:52.981104771Z" level=info msg="StopPodSandbox for \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\" returns successfully" Dec 13 02:00:52.983642 systemd[1]: run-netns-cni\x2d30fda51a\x2d0542\x2daaa7\x2d0702\x2d1a60e4645e82.mount: Deactivated successfully. Dec 13 02:00:52.992640 containerd[1476]: time="2024-12-13T02:00:52.992569663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f5bh8,Uid:3d48b786-1c23-46ba-b027-707a91594565,Namespace:kube-system,Attempt:1,}" Dec 13 02:00:53.154404 systemd-networkd[1368]: cali00a70c4c76b: Link UP Dec 13 02:00:53.155924 systemd-networkd[1368]: cali00a70c4c76b: Gained carrier Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.053 [INFO][4300] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0 coredns-6f6b679f8f- kube-system 3d48b786-1c23-46ba-b027-707a91594565 757 0 2024-12-13 02:00:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-4-277531bf34 coredns-6f6b679f8f-f5bh8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali00a70c4c76b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Namespace="kube-system" Pod="coredns-6f6b679f8f-f5bh8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.053 [INFO][4300] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Namespace="kube-system" Pod="coredns-6f6b679f8f-f5bh8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.084 [INFO][4308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" HandleID="k8s-pod-network.e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" 
Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.104 [INFO][4308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" HandleID="k8s-pod-network.e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d680), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-4-277531bf34", "pod":"coredns-6f6b679f8f-f5bh8", "timestamp":"2024-12-13 02:00:53.084706867 +0000 UTC"}, Hostname:"ci-4081-2-1-4-277531bf34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.105 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.105 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.105 [INFO][4308] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-277531bf34' Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.108 [INFO][4308] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.114 [INFO][4308] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.125 [INFO][4308] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.127 [INFO][4308] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.130 [INFO][4308] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.130 [INFO][4308] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.132 [INFO][4308] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866 Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.138 [INFO][4308] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.147 [INFO][4308] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.67/26] block=192.168.13.64/26 handle="k8s-pod-network.e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.147 [INFO][4308] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.67/26] 
handle="k8s-pod-network.e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.147 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:53.178185 containerd[1476]: 2024-12-13 02:00:53.147 [INFO][4308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.67/26] IPv6=[] ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" HandleID="k8s-pod-network.e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:53.181274 containerd[1476]: 2024-12-13 02:00:53.149 [INFO][4300] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Namespace="kube-system" Pod="coredns-6f6b679f8f-f5bh8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3d48b786-1c23-46ba-b027-707a91594565", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"", Pod:"coredns-6f6b679f8f-f5bh8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00a70c4c76b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:53.181274 containerd[1476]: 2024-12-13 02:00:53.149 [INFO][4300] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.67/32] ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Namespace="kube-system" Pod="coredns-6f6b679f8f-f5bh8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:53.181274 containerd[1476]: 2024-12-13 02:00:53.149 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00a70c4c76b ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Namespace="kube-system" Pod="coredns-6f6b679f8f-f5bh8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:53.181274 
containerd[1476]: 2024-12-13 02:00:53.153 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Namespace="kube-system" Pod="coredns-6f6b679f8f-f5bh8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:53.181274 containerd[1476]: 2024-12-13 02:00:53.155 [INFO][4300] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Namespace="kube-system" Pod="coredns-6f6b679f8f-f5bh8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3d48b786-1c23-46ba-b027-707a91594565", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866", Pod:"coredns-6f6b679f8f-f5bh8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00a70c4c76b", MAC:"fa:4b:4d:d9:e7:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:53.181274 containerd[1476]: 2024-12-13 02:00:53.174 [INFO][4300] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866" Namespace="kube-system" Pod="coredns-6f6b679f8f-f5bh8" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:00:53.206246 containerd[1476]: time="2024-12-13T02:00:53.205836896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:53.206246 containerd[1476]: time="2024-12-13T02:00:53.205893698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:53.206246 containerd[1476]: time="2024-12-13T02:00:53.205916858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:53.206613 containerd[1476]: time="2024-12-13T02:00:53.206245984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:53.239248 systemd[1]: Started cri-containerd-e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866.scope - libcontainer container e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866. Dec 13 02:00:53.289583 containerd[1476]: time="2024-12-13T02:00:53.289533104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f5bh8,Uid:3d48b786-1c23-46ba-b027-707a91594565,Namespace:kube-system,Attempt:1,} returns sandbox id \"e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866\"" Dec 13 02:00:53.295744 containerd[1476]: time="2024-12-13T02:00:53.295689580Z" level=info msg="CreateContainer within sandbox \"e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:00:53.322392 containerd[1476]: time="2024-12-13T02:00:53.321628466Z" level=info msg="CreateContainer within sandbox \"e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed3cd42c7ec9b0504b26589f279168fc988557f3a391990b2d824c3d8a424df8\"" Dec 13 02:00:53.322537 containerd[1476]: time="2024-12-13T02:00:53.322509362Z" level=info msg="StartContainer for \"ed3cd42c7ec9b0504b26589f279168fc988557f3a391990b2d824c3d8a424df8\"" Dec 13 02:00:53.349201 systemd[1]: Started cri-containerd-ed3cd42c7ec9b0504b26589f279168fc988557f3a391990b2d824c3d8a424df8.scope - libcontainer container ed3cd42c7ec9b0504b26589f279168fc988557f3a391990b2d824c3d8a424df8. Dec 13 02:00:53.388892 containerd[1476]: time="2024-12-13T02:00:53.388852125Z" level=info msg="StartContainer for \"ed3cd42c7ec9b0504b26589f279168fc988557f3a391990b2d824c3d8a424df8\" returns successfully" Dec 13 02:00:53.878515 containerd[1476]: time="2024-12-13T02:00:53.878049729Z" level=info msg="StopPodSandbox for \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\"" Dec 13 02:00:53.878714 containerd[1476]: time="2024-12-13T02:00:53.878661381Z" level=info msg="StopPodSandbox for \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\"" Dec 13 02:00:53.969555 systemd-networkd[1368]: cali582755c968a: Gained IPv6LL Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:53.962 [INFO][4436] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:53.963 [INFO][4436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" iface="eth0" netns="/var/run/netns/cni-9a91296a-f1be-b20c-cbf2-cbfd9caaf63e" Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:53.963 [INFO][4436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" iface="eth0" netns="/var/run/netns/cni-9a91296a-f1be-b20c-cbf2-cbfd9caaf63e" Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:53.963 [INFO][4436] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" iface="eth0" netns="/var/run/netns/cni-9a91296a-f1be-b20c-cbf2-cbfd9caaf63e" Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:53.963 [INFO][4436] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:53.963 [INFO][4436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:53.991 [INFO][4452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" HandleID="k8s-pod-network.1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:53.991 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:53.991 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:54.002 [WARNING][4452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" HandleID="k8s-pod-network.1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:54.002 [INFO][4452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" HandleID="k8s-pod-network.1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:54.004 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:54.011270 containerd[1476]: 2024-12-13 02:00:54.007 [INFO][4436] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:00:54.011270 containerd[1476]: time="2024-12-13T02:00:54.011121145Z" level=info msg="TearDown network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\" successfully" Dec 13 02:00:54.011270 containerd[1476]: time="2024-12-13T02:00:54.011160585Z" level=info msg="StopPodSandbox for \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\" returns successfully" Dec 13 02:00:54.014191 containerd[1476]: time="2024-12-13T02:00:54.013777755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xlsvx,Uid:e913ba99-51c8-4660-80f1-d499d5133b83,Namespace:kube-system,Attempt:1,}" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:53.943 [INFO][4435] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:53.943 [INFO][4435] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" iface="eth0" netns="/var/run/netns/cni-b783c257-fa37-749b-5fbf-25935ab6f74a" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:53.944 [INFO][4435] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" iface="eth0" netns="/var/run/netns/cni-b783c257-fa37-749b-5fbf-25935ab6f74a" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:53.946 [INFO][4435] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" iface="eth0" netns="/var/run/netns/cni-b783c257-fa37-749b-5fbf-25935ab6f74a" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:53.947 [INFO][4435] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:53.947 [INFO][4435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:54.000 [INFO][4448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" HandleID="k8s-pod-network.a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:54.001 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:54.004 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:54.021 [WARNING][4448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" HandleID="k8s-pod-network.a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:54.022 [INFO][4448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" HandleID="k8s-pod-network.a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:54.024 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:54.031380 containerd[1476]: 2024-12-13 02:00:54.027 [INFO][4435] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:00:54.032461 containerd[1476]: time="2024-12-13T02:00:54.031785097Z" level=info msg="TearDown network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\" successfully" Dec 13 02:00:54.032461 containerd[1476]: time="2024-12-13T02:00:54.031813497Z" level=info msg="StopPodSandbox for \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\" returns successfully" Dec 13 02:00:54.033001 containerd[1476]: time="2024-12-13T02:00:54.032960679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6685777db6-ng5ph,Uid:b0f89edf-f3cb-4a33-b829-5564e6aeb598,Namespace:calico-system,Attempt:1,}" Dec 13 02:00:54.053086 systemd[1]: run-netns-cni\x2db783c257\x2dfa37\x2d749b\x2d5fbf\x2d25935ab6f74a.mount: Deactivated successfully. Dec 13 02:00:54.053176 systemd[1]: run-netns-cni\x2d9a91296a\x2df1be\x2db20c\x2dcbf2\x2dcbfd9caaf63e.mount: Deactivated successfully. Dec 13 02:00:54.134273 kubelet[2674]: I1213 02:00:54.133970 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-f5bh8" podStartSLOduration=39.133855193 podStartE2EDuration="39.133855193s" podCreationTimestamp="2024-12-13 02:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:54.125619917 +0000 UTC m=+46.393186147" watchObservedRunningTime="2024-12-13 02:00:54.133855193 +0000 UTC m=+46.401421423" Dec 13 02:00:54.251974 systemd-networkd[1368]: cali12690c5241c: Link UP Dec 13 02:00:54.253284 systemd-networkd[1368]: cali12690c5241c: Gained carrier Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.102 [INFO][4462] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0 coredns-6f6b679f8f- kube-system e913ba99-51c8-4660-80f1-d499d5133b83 769 0 2024-12-13 02:00:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-4-277531bf34 coredns-6f6b679f8f-xlsvx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali12690c5241c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xlsvx" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.102 [INFO][4462] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xlsvx" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.172 [INFO][4485] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" HandleID="k8s-pod-network.7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.198 [INFO][4485] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" HandleID="k8s-pod-network.7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d3c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-4-277531bf34", "pod":"coredns-6f6b679f8f-xlsvx", "timestamp":"2024-12-13 02:00:54.172900653 +0000 UTC"}, Hostname:"ci-4081-2-1-4-277531bf34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.199 [INFO][4485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.199 [INFO][4485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.199 [INFO][4485] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-277531bf34' Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.203 [INFO][4485] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.214 [INFO][4485] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.221 [INFO][4485] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.224 [INFO][4485] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.227 [INFO][4485] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.227 [INFO][4485] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.229 [INFO][4485] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.235 [INFO][4485] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.244 [INFO][4485] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.68/26] block=192.168.13.64/26 handle="k8s-pod-network.7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.245 [INFO][4485] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.68/26] handle="k8s-pod-network.7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.245 [INFO][4485] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:54.276866 containerd[1476]: 2024-12-13 02:00:54.245 [INFO][4485] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.68/26] IPv6=[] ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" HandleID="k8s-pod-network.7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.279640 containerd[1476]: 2024-12-13 02:00:54.248 [INFO][4462] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xlsvx" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e913ba99-51c8-4660-80f1-d499d5133b83", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"", Pod:"coredns-6f6b679f8f-xlsvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12690c5241c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:54.279640 containerd[1476]: 2024-12-13 02:00:54.248 [INFO][4462] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.68/32] ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xlsvx" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.279640 containerd[1476]: 2024-12-13 02:00:54.248 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12690c5241c ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xlsvx" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.279640 containerd[1476]: 2024-12-13 02:00:54.253 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-xlsvx" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.279640 containerd[1476]: 2024-12-13 02:00:54.254 [INFO][4462] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xlsvx" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e913ba99-51c8-4660-80f1-d499d5133b83", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d", Pod:"coredns-6f6b679f8f-xlsvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12690c5241c", MAC:"26:48:05:93:e8:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:54.279640 containerd[1476]: 2024-12-13 02:00:54.274 [INFO][4462] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xlsvx" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:00:54.312984 containerd[1476]: time="2024-12-13T02:00:54.312667905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:54.312984 containerd[1476]: time="2024-12-13T02:00:54.312753186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:54.312984 containerd[1476]: time="2024-12-13T02:00:54.312765107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:54.312984 containerd[1476]: time="2024-12-13T02:00:54.312880909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:54.348530 systemd[1]: Started cri-containerd-7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d.scope - libcontainer container 7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d. Dec 13 02:00:54.369941 systemd-networkd[1368]: cali676c599a15b: Link UP Dec 13 02:00:54.371071 systemd-networkd[1368]: cali676c599a15b: Gained carrier Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.148 [INFO][4470] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0 calico-kube-controllers-6685777db6- calico-system b0f89edf-f3cb-4a33-b829-5564e6aeb598 768 0 2024-12-13 02:00:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6685777db6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-4-277531bf34 calico-kube-controllers-6685777db6-ng5ph eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali676c599a15b [] []}} ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Namespace="calico-system" Pod="calico-kube-controllers-6685777db6-ng5ph" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.148 [INFO][4470] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Namespace="calico-system" Pod="calico-kube-controllers-6685777db6-ng5ph" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.213 [INFO][4492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" HandleID="k8s-pod-network.e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.227 [INFO][4492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" HandleID="k8s-pod-network.e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000482970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-4-277531bf34", "pod":"calico-kube-controllers-6685777db6-ng5ph", "timestamp":"2024-12-13 02:00:54.212992254 +0000 UTC"}, Hostname:"ci-4081-2-1-4-277531bf34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.227 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.246 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.246 [INFO][4492] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-277531bf34' Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.304 [INFO][4492] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.312 [INFO][4492] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.325 [INFO][4492] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.329 [INFO][4492] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.337 [INFO][4492] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.337 [INFO][4492] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.340 [INFO][4492] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.348 [INFO][4492] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.360 [INFO][4492] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.69/26] block=192.168.13.64/26 handle="k8s-pod-network.e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.360 [INFO][4492] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.69/26] handle="k8s-pod-network.e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.360 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:00:54.390582 containerd[1476]: 2024-12-13 02:00:54.360 [INFO][4492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.69/26] IPv6=[] ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" HandleID="k8s-pod-network.e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:00:54.391816 containerd[1476]: 2024-12-13 02:00:54.363 [INFO][4470] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Namespace="calico-system" Pod="calico-kube-controllers-6685777db6-ng5ph" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0", GenerateName:"calico-kube-controllers-6685777db6-", Namespace:"calico-system", SelfLink:"", UID:"b0f89edf-f3cb-4a33-b829-5564e6aeb598", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6685777db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"", Pod:"calico-kube-controllers-6685777db6-ng5ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali676c599a15b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:54.391816 containerd[1476]: 2024-12-13 02:00:54.363 [INFO][4470] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.69/32] ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Namespace="calico-system" Pod="calico-kube-controllers-6685777db6-ng5ph" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:00:54.391816 containerd[1476]: 2024-12-13 02:00:54.363 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali676c599a15b ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Namespace="calico-system" Pod="calico-kube-controllers-6685777db6-ng5ph" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:00:54.391816 containerd[1476]: 2024-12-13 02:00:54.371 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Namespace="calico-system" Pod="calico-kube-controllers-6685777db6-ng5ph" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 
02:00:54.391816 containerd[1476]: 2024-12-13 02:00:54.372 [INFO][4470] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Namespace="calico-system" Pod="calico-kube-controllers-6685777db6-ng5ph" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0", GenerateName:"calico-kube-controllers-6685777db6-", Namespace:"calico-system", SelfLink:"", UID:"b0f89edf-f3cb-4a33-b829-5564e6aeb598", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6685777db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f", Pod:"calico-kube-controllers-6685777db6-ng5ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali676c599a15b", MAC:"66:4e:2c:1f:88:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:54.391816 containerd[1476]: 2024-12-13 02:00:54.386 [INFO][4470] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f" Namespace="calico-system" Pod="calico-kube-controllers-6685777db6-ng5ph" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:00:54.421514 containerd[1476]: time="2024-12-13T02:00:54.421387247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xlsvx,Uid:e913ba99-51c8-4660-80f1-d499d5133b83,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d\"" Dec 13 02:00:54.429139 containerd[1476]: time="2024-12-13T02:00:54.428008173Z" level=info msg="CreateContainer within sandbox \"7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:00:54.436997 containerd[1476]: time="2024-12-13T02:00:54.436680457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:54.436997 containerd[1476]: time="2024-12-13T02:00:54.436754939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:54.436997 containerd[1476]: time="2024-12-13T02:00:54.436766059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:54.436997 containerd[1476]: time="2024-12-13T02:00:54.436851820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:54.447283 containerd[1476]: time="2024-12-13T02:00:54.447231057Z" level=info msg="CreateContainer within sandbox \"7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d70e9b0d376bc59f62da4441d1a189b2d73f1a0270f275f8337bc9d959156d59\"" Dec 13 02:00:54.449270 containerd[1476]: time="2024-12-13T02:00:54.448510682Z" level=info msg="StartContainer for \"d70e9b0d376bc59f62da4441d1a189b2d73f1a0270f275f8337bc9d959156d59\"" Dec 13 02:00:54.467195 systemd[1]: Started cri-containerd-e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f.scope - libcontainer container e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f. Dec 13 02:00:54.476476 systemd[1]: Started cri-containerd-d70e9b0d376bc59f62da4441d1a189b2d73f1a0270f275f8337bc9d959156d59.scope - libcontainer container d70e9b0d376bc59f62da4441d1a189b2d73f1a0270f275f8337bc9d959156d59. Dec 13 02:00:54.515610 containerd[1476]: time="2024-12-13T02:00:54.515518553Z" level=info msg="StartContainer for \"d70e9b0d376bc59f62da4441d1a189b2d73f1a0270f275f8337bc9d959156d59\" returns successfully" Dec 13 02:00:54.546491 containerd[1476]: time="2024-12-13T02:00:54.546376258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6685777db6-ng5ph,Uid:b0f89edf-f3cb-4a33-b829-5564e6aeb598,Namespace:calico-system,Attempt:1,} returns sandbox id \"e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f\"" Dec 13 02:00:54.737548 systemd-networkd[1368]: cali00a70c4c76b: Gained IPv6LL Dec 13 02:00:55.053320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount364296829.mount: Deactivated successfully. 
Dec 13 02:00:55.441179 systemd-networkd[1368]: cali12690c5241c: Gained IPv6LL Dec 13 02:00:55.633637 systemd-networkd[1368]: cali676c599a15b: Gained IPv6LL Dec 13 02:00:55.911558 containerd[1476]: time="2024-12-13T02:00:55.909453521Z" level=info msg="StopPodSandbox for \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\"" Dec 13 02:00:56.028642 containerd[1476]: time="2024-12-13T02:00:56.028589214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:56.030332 containerd[1476]: time="2024-12-13T02:00:56.030083363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 02:00:56.033800 kubelet[2674]: I1213 02:00:56.033378 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xlsvx" podStartSLOduration=41.033343067 podStartE2EDuration="41.033343067s" podCreationTimestamp="2024-12-13 02:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:55.151267806 +0000 UTC m=+47.418834036" watchObservedRunningTime="2024-12-13 02:00:56.033343067 +0000 UTC m=+48.300909297" Dec 13 02:00:56.037108 containerd[1476]: time="2024-12-13T02:00:56.036656691Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:56.042566 containerd[1476]: time="2024-12-13T02:00:56.042516845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:56.044210 containerd[1476]: time="2024-12-13T02:00:56.044083115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.700693581s" Dec 13 02:00:56.044210 containerd[1476]: time="2024-12-13T02:00:56.044124436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 02:00:56.047908 containerd[1476]: time="2024-12-13T02:00:56.047780947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 02:00:56.051893 containerd[1476]: time="2024-12-13T02:00:56.051411138Z" level=info msg="CreateContainer within sandbox \"19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:00:56.095182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193757439.mount: Deactivated successfully. 
Dec 13 02:00:56.098339 containerd[1476]: time="2024-12-13T02:00:56.098288208Z" level=info msg="CreateContainer within sandbox \"19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83ec96f953eba2b00858fc2f796538476459bfe19c498f8930dadbff73e7a11f\"" Dec 13 02:00:56.101494 containerd[1476]: time="2024-12-13T02:00:56.101045941Z" level=info msg="StartContainer for \"83ec96f953eba2b00858fc2f796538476459bfe19c498f8930dadbff73e7a11f\"" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.032 [INFO][4669] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.036 [INFO][4669] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" iface="eth0" netns="/var/run/netns/cni-50b85022-1e9b-6c80-d5ff-a1fae7ded32f" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.036 [INFO][4669] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" iface="eth0" netns="/var/run/netns/cni-50b85022-1e9b-6c80-d5ff-a1fae7ded32f" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.037 [INFO][4669] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" iface="eth0" netns="/var/run/netns/cni-50b85022-1e9b-6c80-d5ff-a1fae7ded32f" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.037 [INFO][4669] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.037 [INFO][4669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.102 [INFO][4699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" HandleID="k8s-pod-network.6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.104 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.104 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.124 [WARNING][4699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" HandleID="k8s-pod-network.6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.124 [INFO][4699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" HandleID="k8s-pod-network.6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.132 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:00:56.142372 containerd[1476]: 2024-12-13 02:00:56.137 [INFO][4669] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:00:56.148732 containerd[1476]: time="2024-12-13T02:00:56.144809471Z" level=info msg="TearDown network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\" successfully" Dec 13 02:00:56.148732 containerd[1476]: time="2024-12-13T02:00:56.145052396Z" level=info msg="StopPodSandbox for \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\" returns successfully" Dec 13 02:00:56.148732 containerd[1476]: time="2024-12-13T02:00:56.148259098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8df5958f-fzbtl,Uid:7522399d-902f-4e35-8d1b-67a7dba93ba9,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:00:56.147181 systemd[1]: run-netns-cni\x2d50b85022\x2d1e9b\x2d6c80\x2dd5ff\x2da1fae7ded32f.mount: Deactivated successfully. Dec 13 02:00:56.168898 systemd[1]: Started cri-containerd-83ec96f953eba2b00858fc2f796538476459bfe19c498f8930dadbff73e7a11f.scope - libcontainer container 83ec96f953eba2b00858fc2f796538476459bfe19c498f8930dadbff73e7a11f. 
Dec 13 02:00:56.282568 containerd[1476]: time="2024-12-13T02:00:56.282514825Z" level=info msg="StartContainer for \"83ec96f953eba2b00858fc2f796538476459bfe19c498f8930dadbff73e7a11f\" returns successfully" Dec 13 02:00:56.372667 systemd-networkd[1368]: cali4f0634065cc: Link UP Dec 13 02:00:56.372916 systemd-networkd[1368]: cali4f0634065cc: Gained carrier Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.230 [INFO][4726] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0 calico-apiserver-f8df5958f- calico-apiserver 7522399d-902f-4e35-8d1b-67a7dba93ba9 796 0 2024-12-13 02:00:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f8df5958f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-4-277531bf34 calico-apiserver-f8df5958f-fzbtl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f0634065cc [] []}} ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-fzbtl" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.231 [INFO][4726] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-fzbtl" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.287 [INFO][4748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" HandleID="k8s-pod-network.87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.311 [INFO][4748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" HandleID="k8s-pod-network.87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-4-277531bf34", "pod":"calico-apiserver-f8df5958f-fzbtl", "timestamp":"2024-12-13 02:00:56.287631964 +0000 UTC"}, Hostname:"ci-4081-2-1-4-277531bf34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.311 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.311 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.311 [INFO][4748] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-277531bf34' Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.315 [INFO][4748] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.320 [INFO][4748] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.336 [INFO][4748] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.339 [INFO][4748] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.344 [INFO][4748] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.344 [INFO][4748] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.347 [INFO][4748] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.354 [INFO][4748] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.360 [INFO][4748] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.70/26] block=192.168.13.64/26 handle="k8s-pod-network.87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.360 [INFO][4748] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.70/26] handle="k8s-pod-network.87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" host="ci-4081-2-1-4-277531bf34" Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.360 [INFO][4748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:00:56.392674 containerd[1476]: 2024-12-13 02:00:56.360 [INFO][4748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.70/26] IPv6=[] ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" HandleID="k8s-pod-network.87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.393267 containerd[1476]: 2024-12-13 02:00:56.363 [INFO][4726] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-fzbtl" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0", GenerateName:"calico-apiserver-f8df5958f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7522399d-902f-4e35-8d1b-67a7dba93ba9", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8df5958f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"", Pod:"calico-apiserver-f8df5958f-fzbtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f0634065cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:56.393267 containerd[1476]: 2024-12-13 02:00:56.363 [INFO][4726] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.70/32] ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-fzbtl" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.393267 containerd[1476]: 2024-12-13 02:00:56.363 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f0634065cc ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-fzbtl" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.393267 containerd[1476]: 2024-12-13 02:00:56.371 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-fzbtl" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.393267 containerd[1476]: 2024-12-13 02:00:56.371 [INFO][4726] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-fzbtl" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0", GenerateName:"calico-apiserver-f8df5958f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7522399d-902f-4e35-8d1b-67a7dba93ba9", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8df5958f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf", Pod:"calico-apiserver-f8df5958f-fzbtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f0634065cc", MAC:"ba:ca:cd:54:8d:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:00:56.393267 containerd[1476]: 2024-12-13 02:00:56.386 [INFO][4726] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf" Namespace="calico-apiserver" Pod="calico-apiserver-f8df5958f-fzbtl" WorkloadEndpoint="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:00:56.435548 containerd[1476]: time="2024-12-13T02:00:56.435173229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:56.435548 containerd[1476]: time="2024-12-13T02:00:56.435242871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:56.435548 containerd[1476]: time="2024-12-13T02:00:56.435258111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:56.435548 containerd[1476]: time="2024-12-13T02:00:56.435373073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:56.457323 systemd[1]: Started cri-containerd-87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf.scope - libcontainer container 87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf. 
Dec 13 02:00:56.546524 containerd[1476]: time="2024-12-13T02:00:56.546434430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8df5958f-fzbtl,Uid:7522399d-902f-4e35-8d1b-67a7dba93ba9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf\"" Dec 13 02:00:56.551610 containerd[1476]: time="2024-12-13T02:00:56.551406926Z" level=info msg="CreateContainer within sandbox \"87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:00:56.569912 containerd[1476]: time="2024-12-13T02:00:56.569866285Z" level=info msg="CreateContainer within sandbox \"87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4d464ece2a0e2d8b5aee3304e1349719a2f3591706accaa67cb4b73d9b29a261\"" Dec 13 02:00:56.572028 containerd[1476]: time="2024-12-13T02:00:56.571968406Z" level=info msg="StartContainer for \"4d464ece2a0e2d8b5aee3304e1349719a2f3591706accaa67cb4b73d9b29a261\"" Dec 13 02:00:56.610231 systemd[1]: Started cri-containerd-4d464ece2a0e2d8b5aee3304e1349719a2f3591706accaa67cb4b73d9b29a261.scope - libcontainer container 4d464ece2a0e2d8b5aee3304e1349719a2f3591706accaa67cb4b73d9b29a261. Dec 13 02:00:56.668704 containerd[1476]: time="2024-12-13T02:00:56.668659083Z" level=info msg="StartContainer for \"4d464ece2a0e2d8b5aee3304e1349719a2f3591706accaa67cb4b73d9b29a261\" returns successfully" Dec 13 02:00:57.201983 kubelet[2674]: I1213 02:00:57.201278 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f8df5958f-q5lr8" podStartSLOduration=28.49659205 podStartE2EDuration="33.201261188s" podCreationTimestamp="2024-12-13 02:00:24 +0000 UTC" firstStartedPulling="2024-12-13 02:00:51.342767802 +0000 UTC m=+43.610334032" lastFinishedPulling="2024-12-13 02:00:56.04743694 +0000 UTC m=+48.315003170" observedRunningTime="2024-12-13 02:00:57.201035304 +0000 UTC m=+49.468601614" watchObservedRunningTime="2024-12-13 02:00:57.201261188 +0000 UTC m=+49.468827418" Dec 13 02:00:57.745261 systemd-networkd[1368]: cali4f0634065cc: Gained IPv6LL Dec 13 02:00:57.937077 containerd[1476]: time="2024-12-13T02:00:57.936379140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:57.937458 containerd[1476]: time="2024-12-13T02:00:57.937166795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 02:00:57.938644 containerd[1476]: time="2024-12-13T02:00:57.938573703Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:57.943052 containerd[1476]: time="2024-12-13T02:00:57.942396778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:57.945245 containerd[1476]: time="2024-12-13T02:00:57.944226934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.896389185s" Dec 13 02:00:57.945245 containerd[1476]: time="2024-12-13T02:00:57.944272895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 02:00:57.949432 containerd[1476]: time="2024-12-13T02:00:57.949313874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 02:00:57.955000 containerd[1476]: time="2024-12-13T02:00:57.954936464Z" level=info msg="CreateContainer within sandbox \"527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 02:00:57.989378 containerd[1476]: time="2024-12-13T02:00:57.989330379Z" level=info msg="CreateContainer within sandbox \"527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d2c6bedb7e6535c82f77c8a3acd06d11595f917bdc21b8b0c720ab52e197c10c\"" Dec 13 02:00:57.991466 containerd[1476]: time="2024-12-13T02:00:57.991431581Z" level=info msg="StartContainer for \"d2c6bedb7e6535c82f77c8a3acd06d11595f917bdc21b8b0c720ab52e197c10c\"" Dec 13 02:00:58.042917 systemd[1]: Started cri-containerd-d2c6bedb7e6535c82f77c8a3acd06d11595f917bdc21b8b0c720ab52e197c10c.scope - libcontainer container d2c6bedb7e6535c82f77c8a3acd06d11595f917bdc21b8b0c720ab52e197c10c. Dec 13 02:00:58.099049 containerd[1476]: time="2024-12-13T02:00:58.098973232Z" level=info msg="StartContainer for \"d2c6bedb7e6535c82f77c8a3acd06d11595f917bdc21b8b0c720ab52e197c10c\" returns successfully" Dec 13 02:00:58.193048 kubelet[2674]: I1213 02:00:58.192839 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:00:58.195425 kubelet[2674]: I1213 02:00:58.193456 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:00:59.633518 containerd[1476]: time="2024-12-13T02:00:59.633446802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:59.635229 containerd[1476]: time="2024-12-13T02:00:59.635188237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 02:00:59.636711 containerd[1476]: time="2024-12-13T02:00:59.636664826Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:59.639164 containerd[1476]: time="2024-12-13T02:00:59.639075115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:00:59.640678 containerd[1476]: time="2024-12-13T02:00:59.640627426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.69124559s" Dec 13 02:00:59.640678 containerd[1476]: 
time="2024-12-13T02:00:59.640674747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 02:00:59.642136 containerd[1476]: time="2024-12-13T02:00:59.642092655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 02:00:59.662974 containerd[1476]: time="2024-12-13T02:00:59.662926393Z" level=info msg="CreateContainer within sandbox \"e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 02:00:59.678834 containerd[1476]: time="2024-12-13T02:00:59.678779110Z" level=info msg="CreateContainer within sandbox \"e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"547022751d849795a2151400b1621c995ed0cc67af8e16548cb00cf92339f55a\"" Dec 13 02:00:59.680489 containerd[1476]: time="2024-12-13T02:00:59.680455264Z" level=info msg="StartContainer for \"547022751d849795a2151400b1621c995ed0cc67af8e16548cb00cf92339f55a\"" Dec 13 02:00:59.723261 systemd[1]: Started cri-containerd-547022751d849795a2151400b1621c995ed0cc67af8e16548cb00cf92339f55a.scope - libcontainer container 547022751d849795a2151400b1621c995ed0cc67af8e16548cb00cf92339f55a. Dec 13 02:00:59.769423 containerd[1476]: time="2024-12-13T02:00:59.769099680Z" level=info msg="StartContainer for \"547022751d849795a2151400b1621c995ed0cc67af8e16548cb00cf92339f55a\" returns successfully" Dec 13 02:01:00.232241 kubelet[2674]: I1213 02:01:00.230919 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f8df5958f-fzbtl" podStartSLOduration=36.230900699 podStartE2EDuration="36.230900699s" podCreationTimestamp="2024-12-13 02:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:57.219355823 +0000 UTC m=+49.486922093" watchObservedRunningTime="2024-12-13 02:01:00.230900699 +0000 UTC m=+52.498466929" Dec 13 02:01:00.234697 kubelet[2674]: I1213 02:01:00.233715 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6685777db6-ng5ph" podStartSLOduration=30.141823752 podStartE2EDuration="35.233699716s" podCreationTimestamp="2024-12-13 02:00:25 +0000 UTC" firstStartedPulling="2024-12-13 02:00:54.549935125 +0000 UTC m=+46.817501355" lastFinishedPulling="2024-12-13 02:00:59.641811049 +0000 UTC m=+51.909377319" observedRunningTime="2024-12-13 02:01:00.232238366 +0000 UTC m=+52.499804596" watchObservedRunningTime="2024-12-13 02:01:00.233699716 +0000 UTC m=+52.501265946" Dec 13 02:01:01.166197 containerd[1476]: time="2024-12-13T02:01:01.166135135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:01:01.168301 containerd[1476]: time="2024-12-13T02:01:01.168164336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 02:01:01.170007 containerd[1476]: time="2024-12-13T02:01:01.169135196Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:01:01.182688 containerd[1476]: 
time="2024-12-13T02:01:01.182078460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:01:01.183592 containerd[1476]: time="2024-12-13T02:01:01.183124482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.540993826s" Dec 13 02:01:01.183592 containerd[1476]: time="2024-12-13T02:01:01.183183043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 02:01:01.188472 containerd[1476]: time="2024-12-13T02:01:01.188414790Z" level=info msg="CreateContainer within sandbox \"527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 02:01:01.213363 containerd[1476]: time="2024-12-13T02:01:01.213308658Z" level=info msg="CreateContainer within sandbox \"527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6803e9b0d434ee3201e0acffe4c5ed474b8239bbecee83b8de4ff3b2d56ffb08\"" Dec 13 02:01:01.214788 containerd[1476]: time="2024-12-13T02:01:01.214723887Z" level=info msg="StartContainer for \"6803e9b0d434ee3201e0acffe4c5ed474b8239bbecee83b8de4ff3b2d56ffb08\"" Dec 13 02:01:01.260990 systemd[1]: Started cri-containerd-6803e9b0d434ee3201e0acffe4c5ed474b8239bbecee83b8de4ff3b2d56ffb08.scope - libcontainer container 6803e9b0d434ee3201e0acffe4c5ed474b8239bbecee83b8de4ff3b2d56ffb08. Dec 13 02:01:01.343076 containerd[1476]: time="2024-12-13T02:01:01.342994347Z" level=info msg="StartContainer for \"6803e9b0d434ee3201e0acffe4c5ed474b8239bbecee83b8de4ff3b2d56ffb08\" returns successfully" Dec 13 02:01:02.003131 kubelet[2674]: I1213 02:01:02.003085 2674 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 02:01:02.015079 kubelet[2674]: I1213 02:01:02.014945 2674 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 02:01:04.902054 update_engine[1451]: I20241213 02:01:04.900146 1451 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 02:01:04.902054 update_engine[1451]: I20241213 02:01:04.901954 1451 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 02:01:04.902534 update_engine[1451]: I20241213 02:01:04.902285 1451 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 02:01:04.903208 update_engine[1451]: I20241213 02:01:04.903172 1451 omaha_request_params.cc:62] Current group set to stable Dec 13 02:01:04.903309 update_engine[1451]: I20241213 02:01:04.903291 1451 update_attempter.cc:499] Already updated boot flags. Skipping. 
Dec 13 02:01:04.903343 update_engine[1451]: I20241213 02:01:04.903306 1451 update_attempter.cc:643] Scheduling an action processor start. Dec 13 02:01:04.903343 update_engine[1451]: I20241213 02:01:04.903324 1451 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 02:01:04.903383 update_engine[1451]: I20241213 02:01:04.903362 1451 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 02:01:04.903439 update_engine[1451]: I20241213 02:01:04.903424 1451 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 02:01:04.903471 update_engine[1451]: I20241213 02:01:04.903436 1451 omaha_request_action.cc:272] Request: Dec 13 02:01:04.903471 update_engine[1451]: Dec 13 02:01:04.903471 update_engine[1451]: Dec 13 02:01:04.903471 update_engine[1451]: Dec 13 02:01:04.903471 update_engine[1451]: Dec 13 02:01:04.903471 update_engine[1451]: Dec 13 02:01:04.903471 update_engine[1451]: Dec 13 02:01:04.903471 update_engine[1451]: Dec 13 02:01:04.903471 update_engine[1451]: Dec 13 02:01:04.903471 update_engine[1451]: I20241213 02:01:04.903445 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:01:04.904408 locksmithd[1500]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 02:01:04.907168 update_engine[1451]: I20241213 02:01:04.907126 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:01:04.907524 update_engine[1451]: I20241213 02:01:04.907483 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:01:04.908575 update_engine[1451]: E20241213 02:01:04.908535 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:01:04.908627 update_engine[1451]: I20241213 02:01:04.908610 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 02:01:07.867129 containerd[1476]: time="2024-12-13T02:01:07.867082973Z" level=info msg="StopPodSandbox for \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\"" Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.926 [WARNING][5036] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c72753a3-f6db-44d3-aefe-df577750df39", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9", Pod:"csi-node-driver-mb9nr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali582755c968a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.927 [INFO][5036] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.927 [INFO][5036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" iface="eth0" netns="" Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.927 [INFO][5036] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.927 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.953 [INFO][5044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" HandleID="k8s-pod-network.73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.953 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.953 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.967 [WARNING][5044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" HandleID="k8s-pod-network.73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.967 [INFO][5044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" HandleID="k8s-pod-network.73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.970 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:07.975882 containerd[1476]: 2024-12-13 02:01:07.973 [INFO][5036] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:01:07.976623 containerd[1476]: time="2024-12-13T02:01:07.975916506Z" level=info msg="TearDown network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\" successfully" Dec 13 02:01:07.976623 containerd[1476]: time="2024-12-13T02:01:07.976190632Z" level=info msg="StopPodSandbox for \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\" returns successfully" Dec 13 02:01:07.976671 containerd[1476]: time="2024-12-13T02:01:07.976610281Z" level=info msg="RemovePodSandbox for \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\"" Dec 13 02:01:07.986834 containerd[1476]: time="2024-12-13T02:01:07.986775459Z" level=info msg="Forcibly stopping sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\"" Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.046 [WARNING][5062] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c72753a3-f6db-44d3-aefe-df577750df39", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"527b0b515bcd6f4d2763601bc0fc426c30f20cc61ab576aa6666ee60041233e9", Pod:"csi-node-driver-mb9nr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali582755c968a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.046 [INFO][5062] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.046 [INFO][5062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" iface="eth0" netns="" Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.047 [INFO][5062] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.047 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.076 [INFO][5068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" HandleID="k8s-pod-network.73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.076 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.076 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.086 [WARNING][5068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" HandleID="k8s-pod-network.73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.086 [INFO][5068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" HandleID="k8s-pod-network.73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Workload="ci--4081--2--1--4--277531bf34-k8s-csi--node--driver--mb9nr-eth0" Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.088 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.092487 containerd[1476]: 2024-12-13 02:01:08.090 [INFO][5062] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc" Dec 13 02:01:08.093115 containerd[1476]: time="2024-12-13T02:01:08.093081311Z" level=info msg="TearDown network for sandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\" successfully" Dec 13 02:01:08.098786 containerd[1476]: time="2024-12-13T02:01:08.098732793Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:01:08.098949 containerd[1476]: time="2024-12-13T02:01:08.098820195Z" level=info msg="RemovePodSandbox \"73d0ab555528026f0c2f2c1a1f52eb9f13161249bc7a6572dea6012f776752cc\" returns successfully" Dec 13 02:01:08.100250 containerd[1476]: time="2024-12-13T02:01:08.099527131Z" level=info msg="StopPodSandbox for \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\"" Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.147 [WARNING][5086] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0", GenerateName:"calico-apiserver-f8df5958f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a78fb6a3-ca79-4137-a8f3-a349aa537780", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8df5958f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6", Pod:"calico-apiserver-f8df5958f-q5lr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9757d3b133", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.147 [INFO][5086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.147 [INFO][5086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" iface="eth0" netns="" Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.147 [INFO][5086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.147 [INFO][5086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.190 [INFO][5092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" HandleID="k8s-pod-network.50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.191 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.191 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.206 [WARNING][5092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" HandleID="k8s-pod-network.50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.206 [INFO][5092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" HandleID="k8s-pod-network.50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.208 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.212745 containerd[1476]: 2024-12-13 02:01:08.210 [INFO][5086] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:01:08.212745 containerd[1476]: time="2024-12-13T02:01:08.212468929Z" level=info msg="TearDown network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\" successfully" Dec 13 02:01:08.212745 containerd[1476]: time="2024-12-13T02:01:08.212494889Z" level=info msg="StopPodSandbox for \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\" returns successfully" Dec 13 02:01:08.214632 containerd[1476]: time="2024-12-13T02:01:08.213434349Z" level=info msg="RemovePodSandbox for \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\"" Dec 13 02:01:08.214632 containerd[1476]: time="2024-12-13T02:01:08.213468470Z" level=info msg="Forcibly stopping sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\"" Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.277 [WARNING][5113] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0", GenerateName:"calico-apiserver-f8df5958f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a78fb6a3-ca79-4137-a8f3-a349aa537780", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8df5958f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"19b61298069c699d550da9c8e52109adea0989a217911702faa4292a8ba9c2e6", Pod:"calico-apiserver-f8df5958f-q5lr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9757d3b133", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.277 [INFO][5113] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.278 [INFO][5113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" iface="eth0" netns="" Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.278 [INFO][5113] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.278 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.299 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" HandleID="k8s-pod-network.50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.300 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.300 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.311 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" HandleID="k8s-pod-network.50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.311 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" HandleID="k8s-pod-network.50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--q5lr8-eth0" Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.314 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.317084 containerd[1476]: 2024-12-13 02:01:08.315 [INFO][5113] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863" Dec 13 02:01:08.318798 containerd[1476]: time="2024-12-13T02:01:08.317970806Z" level=info msg="TearDown network for sandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\" successfully" Dec 13 02:01:08.322449 containerd[1476]: time="2024-12-13T02:01:08.322234298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:01:08.322449 containerd[1476]: time="2024-12-13T02:01:08.322324780Z" level=info msg="RemovePodSandbox \"50a73da35bcc71888c4abfbffda58a57244803f611bd4a03333e59afb47cf863\" returns successfully" Dec 13 02:01:08.323208 containerd[1476]: time="2024-12-13T02:01:08.322814911Z" level=info msg="StopPodSandbox for \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\"" Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.373 [WARNING][5139] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e913ba99-51c8-4660-80f1-d499d5133b83", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d", Pod:"coredns-6f6b679f8f-xlsvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12690c5241c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.374 [INFO][5139] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.374 [INFO][5139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" iface="eth0" netns="" Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.374 [INFO][5139] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.374 [INFO][5139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.398 [INFO][5145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" HandleID="k8s-pod-network.1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.398 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.398 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.411 [WARNING][5145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" HandleID="k8s-pod-network.1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.411 [INFO][5145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" HandleID="k8s-pod-network.1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.413 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.418130 containerd[1476]: 2024-12-13 02:01:08.415 [INFO][5139] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:01:08.418559 containerd[1476]: time="2024-12-13T02:01:08.418163529Z" level=info msg="TearDown network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\" successfully" Dec 13 02:01:08.418559 containerd[1476]: time="2024-12-13T02:01:08.418192170Z" level=info msg="StopPodSandbox for \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\" returns successfully" Dec 13 02:01:08.418980 containerd[1476]: time="2024-12-13T02:01:08.418655380Z" level=info msg="RemovePodSandbox for \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\"" Dec 13 02:01:08.418980 containerd[1476]: time="2024-12-13T02:01:08.418706741Z" level=info msg="Forcibly stopping sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\"" Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.483 [WARNING][5164] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e913ba99-51c8-4660-80f1-d499d5133b83", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"7ae5b1c2804194c15743f5b8c833dd9c4818c55fc0b6f4c5eb584e6c83b6064d", Pod:"coredns-6f6b679f8f-xlsvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12690c5241c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.484 [INFO][5164] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.484 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" iface="eth0" netns="" Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.484 [INFO][5164] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.484 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.507 [INFO][5170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" HandleID="k8s-pod-network.1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.508 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.508 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.518 [WARNING][5170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" HandleID="k8s-pod-network.1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.518 [INFO][5170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" HandleID="k8s-pod-network.1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--xlsvx-eth0" Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.520 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.524729 containerd[1476]: 2024-12-13 02:01:08.522 [INFO][5164] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b" Dec 13 02:01:08.524729 containerd[1476]: time="2024-12-13T02:01:08.524685269Z" level=info msg="TearDown network for sandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\" successfully" Dec 13 02:01:08.528386 containerd[1476]: time="2024-12-13T02:01:08.528349188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:01:08.528458 containerd[1476]: time="2024-12-13T02:01:08.528421109Z" level=info msg="RemovePodSandbox \"1ad33a6287454b56f1e3763f93e811608b321cb795342821c5ee26ff3605464b\" returns successfully" Dec 13 02:01:08.529210 containerd[1476]: time="2024-12-13T02:01:08.528944560Z" level=info msg="StopPodSandbox for \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\"" Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.573 [WARNING][5188] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3d48b786-1c23-46ba-b027-707a91594565", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866", Pod:"coredns-6f6b679f8f-f5bh8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00a70c4c76b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.573 [INFO][5188] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.573 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" iface="eth0" netns="" Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.573 [INFO][5188] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.573 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.592 [INFO][5194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" HandleID="k8s-pod-network.d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.593 [INFO][5194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.593 [INFO][5194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.601 [WARNING][5194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" HandleID="k8s-pod-network.d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.601 [INFO][5194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" HandleID="k8s-pod-network.d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.603 [INFO][5194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.606198 containerd[1476]: 2024-12-13 02:01:08.604 [INFO][5188] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:01:08.607574 containerd[1476]: time="2024-12-13T02:01:08.606336791Z" level=info msg="TearDown network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\" successfully" Dec 13 02:01:08.607574 containerd[1476]: time="2024-12-13T02:01:08.606364072Z" level=info msg="StopPodSandbox for \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\" returns successfully" Dec 13 02:01:08.607574 containerd[1476]: time="2024-12-13T02:01:08.606792721Z" level=info msg="RemovePodSandbox for \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\"" Dec 13 02:01:08.607574 containerd[1476]: time="2024-12-13T02:01:08.606830682Z" level=info msg="Forcibly stopping sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\"" Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.646 [WARNING][5212] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3d48b786-1c23-46ba-b027-707a91594565", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"e591f362e9f272acb3054d1ae35abfe0868e3d6edb7ede49e9c6002026f29866", Pod:"coredns-6f6b679f8f-f5bh8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00a70c4c76b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.647 [INFO][5212] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.647 [INFO][5212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" iface="eth0" netns="" Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.647 [INFO][5212] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.647 [INFO][5212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.666 [INFO][5218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" HandleID="k8s-pod-network.d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.666 [INFO][5218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.666 [INFO][5218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.678 [WARNING][5218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" HandleID="k8s-pod-network.d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.678 [INFO][5218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" HandleID="k8s-pod-network.d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Workload="ci--4081--2--1--4--277531bf34-k8s-coredns--6f6b679f8f--f5bh8-eth0" Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.680 [INFO][5218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.684070 containerd[1476]: 2024-12-13 02:01:08.681 [INFO][5212] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919" Dec 13 02:01:08.684070 containerd[1476]: time="2024-12-13T02:01:08.683304813Z" level=info msg="TearDown network for sandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\" successfully" Dec 13 02:01:08.687104 containerd[1476]: time="2024-12-13T02:01:08.687062294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:01:08.687292 containerd[1476]: time="2024-12-13T02:01:08.687274138Z" level=info msg="RemovePodSandbox \"d8ea9785b7292556544746a0b1e9e0387cafb86868baad4f7a7118548b877919\" returns successfully" Dec 13 02:01:08.687965 containerd[1476]: time="2024-12-13T02:01:08.687941553Z" level=info msg="StopPodSandbox for \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\"" Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.736 [WARNING][5236] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0", GenerateName:"calico-kube-controllers-6685777db6-", Namespace:"calico-system", SelfLink:"", UID:"b0f89edf-f3cb-4a33-b829-5564e6aeb598", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6685777db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f", Pod:"calico-kube-controllers-6685777db6-ng5ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali676c599a15b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.737 [INFO][5236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.737 [INFO][5236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" iface="eth0" netns="" Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.737 [INFO][5236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.737 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.759 [INFO][5243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" HandleID="k8s-pod-network.a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.759 [INFO][5243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.759 [INFO][5243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.771 [WARNING][5243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" HandleID="k8s-pod-network.a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.771 [INFO][5243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" HandleID="k8s-pod-network.a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.773 [INFO][5243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.778188 containerd[1476]: 2024-12-13 02:01:08.775 [INFO][5236] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:01:08.778188 containerd[1476]: time="2024-12-13T02:01:08.778158020Z" level=info msg="TearDown network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\" successfully" Dec 13 02:01:08.778725 containerd[1476]: time="2024-12-13T02:01:08.778199981Z" level=info msg="StopPodSandbox for \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\" returns successfully" Dec 13 02:01:08.780560 containerd[1476]: time="2024-12-13T02:01:08.780508431Z" level=info msg="RemovePodSandbox for \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\"" Dec 13 02:01:08.780646 containerd[1476]: time="2024-12-13T02:01:08.780578953Z" level=info msg="Forcibly stopping sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\"" Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.825 [WARNING][5261] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0", GenerateName:"calico-kube-controllers-6685777db6-", Namespace:"calico-system", SelfLink:"", UID:"b0f89edf-f3cb-4a33-b829-5564e6aeb598", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6685777db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"e2d4aecfa176bc9f899bcc64caa484fa5260384513781afb2d02637f1930325f", Pod:"calico-kube-controllers-6685777db6-ng5ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali676c599a15b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.825 [INFO][5261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.825 [INFO][5261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" iface="eth0" netns="" Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.825 [INFO][5261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.825 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.844 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" HandleID="k8s-pod-network.a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.844 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.845 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.857 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" HandleID="k8s-pod-network.a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.857 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" HandleID="k8s-pod-network.a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--kube--controllers--6685777db6--ng5ph-eth0" Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.859 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.862527 containerd[1476]: 2024-12-13 02:01:08.860 [INFO][5261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e" Dec 13 02:01:08.862527 containerd[1476]: time="2024-12-13T02:01:08.862482681Z" level=info msg="TearDown network for sandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\" successfully" Dec 13 02:01:08.871985 containerd[1476]: time="2024-12-13T02:01:08.871730720Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:01:08.871985 containerd[1476]: time="2024-12-13T02:01:08.871859363Z" level=info msg="RemovePodSandbox \"a71be1b903306d3a436ebf5b1a7851be2c692513e0d44eb096bbf2cca75aa08e\" returns successfully" Dec 13 02:01:08.872553 containerd[1476]: time="2024-12-13T02:01:08.872382014Z" level=info msg="StopPodSandbox for \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\"" Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.913 [WARNING][5285] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0", GenerateName:"calico-apiserver-f8df5958f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7522399d-902f-4e35-8d1b-67a7dba93ba9", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8df5958f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf", Pod:"calico-apiserver-f8df5958f-fzbtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f0634065cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.913 [INFO][5285] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.913 [INFO][5285] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" iface="eth0" netns="" Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.913 [INFO][5285] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.913 [INFO][5285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.935 [INFO][5291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" HandleID="k8s-pod-network.6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.935 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.935 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.951 [WARNING][5291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" HandleID="k8s-pod-network.6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.951 [INFO][5291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" HandleID="k8s-pod-network.6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.953 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:08.957130 containerd[1476]: 2024-12-13 02:01:08.955 [INFO][5285] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:01:08.958303 containerd[1476]: time="2024-12-13T02:01:08.958086865Z" level=info msg="TearDown network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\" successfully" Dec 13 02:01:08.958303 containerd[1476]: time="2024-12-13T02:01:08.958143386Z" level=info msg="StopPodSandbox for \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\" returns successfully" Dec 13 02:01:08.958700 containerd[1476]: time="2024-12-13T02:01:08.958669637Z" level=info msg="RemovePodSandbox for \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\"" Dec 13 02:01:08.958777 containerd[1476]: time="2024-12-13T02:01:08.958706038Z" level=info msg="Forcibly stopping sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\"" Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:08.997 [WARNING][5310] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0", GenerateName:"calico-apiserver-f8df5958f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7522399d-902f-4e35-8d1b-67a7dba93ba9", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8df5958f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-277531bf34", ContainerID:"87536ba2502623f4e15276c4d6f326407e95356945268644b5518207a2294cdf", Pod:"calico-apiserver-f8df5958f-fzbtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f0634065cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:08.997 [INFO][5310] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:08.997 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" iface="eth0" netns="" Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:08.997 [INFO][5310] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:08.997 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:09.028 [INFO][5316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" HandleID="k8s-pod-network.6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:09.028 [INFO][5316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:09.029 [INFO][5316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:09.041 [WARNING][5316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" HandleID="k8s-pod-network.6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:09.041 [INFO][5316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" HandleID="k8s-pod-network.6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Workload="ci--4081--2--1--4--277531bf34-k8s-calico--apiserver--f8df5958f--fzbtl-eth0" Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:09.043 [INFO][5316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:01:09.053107 containerd[1476]: 2024-12-13 02:01:09.047 [INFO][5310] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52" Dec 13 02:01:09.061302 systemd[1]: run-containerd-runc-k8s.io-547022751d849795a2151400b1621c995ed0cc67af8e16548cb00cf92339f55a-runc.OTC27a.mount: Deactivated successfully. Dec 13 02:01:09.078607 containerd[1476]: time="2024-12-13T02:01:09.077986664Z" level=info msg="TearDown network for sandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\" successfully" Dec 13 02:01:09.082423 containerd[1476]: time="2024-12-13T02:01:09.082241557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:01:09.082423 containerd[1476]: time="2024-12-13T02:01:09.082318318Z" level=info msg="RemovePodSandbox \"6e3fadc80c29fcea7d3b59e8e5ea08f4381d2951001e90d41dc18903da3c4c52\" returns successfully" Dec 13 02:01:14.907326 update_engine[1451]: I20241213 02:01:14.907205 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:01:14.907801 update_engine[1451]: I20241213 02:01:14.907560 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:01:14.907881 update_engine[1451]: I20241213 02:01:14.907838 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:01:14.908778 update_engine[1451]: E20241213 02:01:14.908701 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:01:14.908915 update_engine[1451]: I20241213 02:01:14.908792 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 02:01:24.906981 update_engine[1451]: I20241213 02:01:24.906858 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:01:24.907658 update_engine[1451]: I20241213 02:01:24.907224 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:01:24.907658 update_engine[1451]: I20241213 02:01:24.907510 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 02:01:24.908489 update_engine[1451]: E20241213 02:01:24.908424 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:01:24.908568 update_engine[1451]: I20241213 02:01:24.908516 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 02:01:31.708999 kubelet[2674]: I1213 02:01:31.708542 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:01:31.744730 kubelet[2674]: I1213 02:01:31.744649 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mb9nr" podStartSLOduration=58.807489602 podStartE2EDuration="1m7.744632868s" podCreationTimestamp="2024-12-13 02:00:24 +0000 UTC" firstStartedPulling="2024-12-13 02:00:52.247454526 +0000 UTC m=+44.515020756" lastFinishedPulling="2024-12-13 02:01:01.184597832 +0000 UTC m=+53.452164022" observedRunningTime="2024-12-13 02:01:02.237160851 +0000 UTC m=+54.504727081" watchObservedRunningTime="2024-12-13 02:01:31.744632868 +0000 UTC m=+84.012199098" Dec 13 02:01:33.566816 kubelet[2674]: I1213 02:01:33.566749 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:01:34.907721 update_engine[1451]: I20241213 02:01:34.907608 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:01:34.908398 update_engine[1451]: I20241213 02:01:34.907950 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:01:34.908398 update_engine[1451]: I20241213 02:01:34.908280 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:01:34.909160 update_engine[1451]: E20241213 02:01:34.909094 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:01:34.909334 update_engine[1451]: I20241213 02:01:34.909174 1451 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 02:01:34.909334 update_engine[1451]: I20241213 02:01:34.909190 1451 omaha_request_action.cc:617] Omaha request response: Dec 13 02:01:34.909334 update_engine[1451]: E20241213 02:01:34.909304 1451 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 02:01:34.909474 update_engine[1451]: I20241213 02:01:34.909345 1451 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 02:01:34.909474 update_engine[1451]: I20241213 02:01:34.909356 1451 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:01:34.909474 update_engine[1451]: I20241213 02:01:34.909364 1451 update_attempter.cc:306] Processing Done. Dec 13 02:01:34.909474 update_engine[1451]: E20241213 02:01:34.909384 1451 update_attempter.cc:619] Update failed. Dec 13 02:01:34.909474 update_engine[1451]: I20241213 02:01:34.909391 1451 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 02:01:34.909474 update_engine[1451]: I20241213 02:01:34.909399 1451 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 02:01:34.909474 update_engine[1451]: I20241213 02:01:34.909408 1451 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 13 02:01:34.910289 update_engine[1451]: I20241213 02:01:34.909499 1451 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 02:01:34.910289 update_engine[1451]: I20241213 02:01:34.909528 1451 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 02:01:34.910289 update_engine[1451]: I20241213 02:01:34.909538 1451 omaha_request_action.cc:272] Request: Dec 13 02:01:34.910289 update_engine[1451]: Dec 13 02:01:34.910289 update_engine[1451]: Dec 13 02:01:34.910289 update_engine[1451]: Dec 13 02:01:34.910289 update_engine[1451]: Dec 13 02:01:34.910289 update_engine[1451]: Dec 13 02:01:34.910289 update_engine[1451]: Dec 13 02:01:34.910289 update_engine[1451]: I20241213 02:01:34.909546 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:01:34.910289 update_engine[1451]: I20241213 02:01:34.909862 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:01:34.910289 update_engine[1451]: I20241213 02:01:34.910224 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:01:34.910793 locksmithd[1500]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 02:01:34.911200 update_engine[1451]: E20241213 02:01:34.911087 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:01:34.911200 update_engine[1451]: I20241213 02:01:34.911154 1451 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 02:01:34.911200 update_engine[1451]: I20241213 02:01:34.911165 1451 omaha_request_action.cc:617] Omaha request response: Dec 13 02:01:34.911200 update_engine[1451]: I20241213 02:01:34.911175 1451 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:01:34.911200 update_engine[1451]: I20241213 02:01:34.911183 1451 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:01:34.911200 update_engine[1451]: I20241213 02:01:34.911191 1451 update_attempter.cc:306] Processing Done. Dec 13 02:01:34.911200 update_engine[1451]: I20241213 02:01:34.911201 1451 update_attempter.cc:310] Error event sent. Dec 13 02:01:34.911690 update_engine[1451]: I20241213 02:01:34.911214 1451 update_check_scheduler.cc:74] Next update check in 46m8s Dec 13 02:01:34.911943 locksmithd[1500]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 02:02:25.951383 systemd[1]: run-containerd-runc-k8s.io-744b29f874aefb501776e2d3d11388fb6e672902e879f5f1f9995d842fe1a6c5-runc.VB0stm.mount: Deactivated successfully. Dec 13 02:04:39.056935 systemd[1]: run-containerd-runc-k8s.io-547022751d849795a2151400b1621c995ed0cc67af8e16548cb00cf92339f55a-runc.YNFX4z.mount: Deactivated successfully. Dec 13 02:04:58.708377 systemd[1]: Started sshd@7-168.119.247.250:22-147.75.109.163:54974.service - OpenSSH per-connection server daemon (147.75.109.163:54974). Dec 13 02:04:59.695147 sshd[5809]: Accepted publickey for core from 147.75.109.163 port 54974 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:04:59.696374 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:04:59.703644 systemd-logind[1450]: New session 8 of user core. Dec 13 02:04:59.709237 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 02:05:00.467930 sshd[5809]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:00.474658 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:05:00.475109 systemd[1]: sshd@7-168.119.247.250:22-147.75.109.163:54974.service: Deactivated successfully.
Dec 13 02:05:00.479676 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:05:00.483914 systemd-logind[1450]: Removed session 8.
Dec 13 02:05:05.648352 systemd[1]: Started sshd@8-168.119.247.250:22-147.75.109.163:54976.service - OpenSSH per-connection server daemon (147.75.109.163:54976).
Dec 13 02:05:06.656192 sshd[5824]: Accepted publickey for core from 147.75.109.163 port 54976 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:06.658299 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:06.665112 systemd-logind[1450]: New session 9 of user core.
Dec 13 02:05:06.670266 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 02:05:07.416362 sshd[5824]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:07.422210 systemd[1]: sshd@8-168.119.247.250:22-147.75.109.163:54976.service: Deactivated successfully.
Dec 13 02:05:07.426412 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 02:05:07.428458 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit.
Dec 13 02:05:07.429872 systemd-logind[1450]: Removed session 9.
Dec 13 02:05:12.597721 systemd[1]: Started sshd@9-168.119.247.250:22-147.75.109.163:33036.service - OpenSSH per-connection server daemon (147.75.109.163:33036).
Dec 13 02:05:13.583249 sshd[5877]: Accepted publickey for core from 147.75.109.163 port 33036 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:13.585394 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:13.593082 systemd-logind[1450]: New session 10 of user core.
Dec 13 02:05:13.601225 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 02:05:14.358262 sshd[5877]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:14.364227 systemd[1]: sshd@9-168.119.247.250:22-147.75.109.163:33036.service: Deactivated successfully.
Dec 13 02:05:14.366974 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 02:05:14.368409 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit.
Dec 13 02:05:14.369626 systemd-logind[1450]: Removed session 10.
Dec 13 02:05:14.533339 systemd[1]: Started sshd@10-168.119.247.250:22-147.75.109.163:33040.service - OpenSSH per-connection server daemon (147.75.109.163:33040).
Dec 13 02:05:15.515950 sshd[5891]: Accepted publickey for core from 147.75.109.163 port 33040 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:15.518626 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:15.525184 systemd-logind[1450]: New session 11 of user core.
Dec 13 02:05:15.535366 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 02:05:16.320331 sshd[5891]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:16.326309 systemd[1]: sshd@10-168.119.247.250:22-147.75.109.163:33040.service: Deactivated successfully.
Dec 13 02:05:16.329834 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 02:05:16.332127 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit.
Dec 13 02:05:16.333723 systemd-logind[1450]: Removed session 11.
Dec 13 02:05:16.502532 systemd[1]: Started sshd@11-168.119.247.250:22-147.75.109.163:34768.service - OpenSSH per-connection server daemon (147.75.109.163:34768).
Dec 13 02:05:17.480843 sshd[5907]: Accepted publickey for core from 147.75.109.163 port 34768 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:17.483053 sshd[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:17.492340 systemd-logind[1450]: New session 12 of user core.
Dec 13 02:05:17.498298 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 02:05:18.260294 sshd[5907]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:18.266321 systemd[1]: sshd@11-168.119.247.250:22-147.75.109.163:34768.service: Deactivated successfully.
Dec 13 02:05:18.271604 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:05:18.275255 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:05:18.276594 systemd-logind[1450]: Removed session 12.
Dec 13 02:05:23.436472 systemd[1]: Started sshd@12-168.119.247.250:22-147.75.109.163:34780.service - OpenSSH per-connection server daemon (147.75.109.163:34780).
Dec 13 02:05:24.425082 sshd[5920]: Accepted publickey for core from 147.75.109.163 port 34780 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:24.425817 sshd[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:24.434111 systemd-logind[1450]: New session 13 of user core.
Dec 13 02:05:24.440477 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 02:05:25.190814 sshd[5920]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:25.197478 systemd[1]: sshd@12-168.119.247.250:22-147.75.109.163:34780.service: Deactivated successfully.
Dec 13 02:05:25.202495 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:05:25.207269 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:05:25.208742 systemd-logind[1450]: Removed session 13.
Dec 13 02:05:25.372491 systemd[1]: Started sshd@13-168.119.247.250:22-147.75.109.163:34786.service - OpenSSH per-connection server daemon (147.75.109.163:34786).
Dec 13 02:05:26.359853 sshd[5938]: Accepted publickey for core from 147.75.109.163 port 34786 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:26.362268 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:26.368266 systemd-logind[1450]: New session 14 of user core.
Dec 13 02:05:26.380413 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 02:05:27.347890 sshd[5938]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:27.353435 systemd[1]: sshd@13-168.119.247.250:22-147.75.109.163:34786.service: Deactivated successfully.
Dec 13 02:05:27.356303 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:05:27.359393 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:05:27.360659 systemd-logind[1450]: Removed session 14.
Dec 13 02:05:27.519467 systemd[1]: Started sshd@14-168.119.247.250:22-147.75.109.163:57900.service - OpenSSH per-connection server daemon (147.75.109.163:57900).
Dec 13 02:05:28.493974 sshd[5970]: Accepted publickey for core from 147.75.109.163 port 57900 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:28.495965 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:28.503312 systemd-logind[1450]: New session 15 of user core.
Dec 13 02:05:28.509367 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 02:05:31.136098 sshd[5970]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:31.141690 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:05:31.141976 systemd[1]: sshd@14-168.119.247.250:22-147.75.109.163:57900.service: Deactivated successfully.
Dec 13 02:05:31.145495 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:05:31.146828 systemd-logind[1450]: Removed session 15.
Dec 13 02:05:31.309785 systemd[1]: Started sshd@15-168.119.247.250:22-147.75.109.163:57914.service - OpenSSH per-connection server daemon (147.75.109.163:57914).
Dec 13 02:05:32.303139 sshd[6000]: Accepted publickey for core from 147.75.109.163 port 57914 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:32.305596 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:32.313272 systemd-logind[1450]: New session 16 of user core.
Dec 13 02:05:32.318273 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 02:05:33.220954 sshd[6000]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:33.230922 systemd[1]: sshd@15-168.119.247.250:22-147.75.109.163:57914.service: Deactivated successfully.
Dec 13 02:05:33.235825 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:05:33.237834 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:05:33.239253 systemd-logind[1450]: Removed session 16.
Dec 13 02:05:33.400738 systemd[1]: Started sshd@16-168.119.247.250:22-147.75.109.163:57916.service - OpenSSH per-connection server daemon (147.75.109.163:57916).
Dec 13 02:05:34.393058 sshd[6012]: Accepted publickey for core from 147.75.109.163 port 57916 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:34.396517 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:34.404532 systemd-logind[1450]: New session 17 of user core.
Dec 13 02:05:34.411724 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 02:05:35.149570 sshd[6012]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:35.155961 systemd[1]: sshd@16-168.119.247.250:22-147.75.109.163:57916.service: Deactivated successfully.
Dec 13 02:05:35.160930 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:05:35.162300 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:05:35.163605 systemd-logind[1450]: Removed session 17.
Dec 13 02:05:40.327704 systemd[1]: Started sshd@17-168.119.247.250:22-147.75.109.163:45474.service - OpenSSH per-connection server daemon (147.75.109.163:45474).
Dec 13 02:05:41.324913 sshd[6048]: Accepted publickey for core from 147.75.109.163 port 45474 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:41.327498 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:41.335732 systemd-logind[1450]: New session 18 of user core.
Dec 13 02:05:41.346319 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 02:05:42.083161 sshd[6048]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:42.086220 systemd[1]: sshd@17-168.119.247.250:22-147.75.109.163:45474.service: Deactivated successfully.
Dec 13 02:05:42.088318 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:05:42.090182 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:05:42.091690 systemd-logind[1450]: Removed session 18.
Dec 13 02:05:47.262641 systemd[1]: Started sshd@18-168.119.247.250:22-147.75.109.163:32960.service - OpenSSH per-connection server daemon (147.75.109.163:32960).
Dec 13 02:05:48.244305 sshd[6063]: Accepted publickey for core from 147.75.109.163 port 32960 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:05:48.246559 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:48.252583 systemd-logind[1450]: New session 19 of user core.
Dec 13 02:05:48.261300 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 02:05:49.007342 sshd[6063]: pam_unix(sshd:session): session closed for user core
Dec 13 02:05:49.015154 systemd[1]: sshd@18-168.119.247.250:22-147.75.109.163:32960.service: Deactivated successfully.
Dec 13 02:05:49.020836 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:05:49.024739 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:05:49.025920 systemd-logind[1450]: Removed session 19.