Jan 30 14:15:52.933224 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 14:15:52.933297 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:15:52.933310 kernel: KASLR enabled
Jan 30 14:15:52.933317 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 30 14:15:52.933324 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 30 14:15:52.933331 kernel: random: crng init done
Jan 30 14:15:52.933339 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:15:52.933347 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 30 14:15:52.933354 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 30 14:15:52.933364 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:15:52.933371 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:15:52.933378 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:15:52.933385 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:15:52.933393 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:15:52.933402 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:15:52.933412 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:15:52.933420 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:15:52.933428 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:15:52.933435 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 14:15:52.933443 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 30 14:15:52.933451 kernel: NUMA: Failed to initialise from firmware
Jan 30 14:15:52.933459 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 14:15:52.933467 kernel: NUMA: NODE_DATA [mem 0x139672800-0x139677fff]
Jan 30 14:15:52.933474 kernel: Zone ranges:
Jan 30 14:15:52.933482 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 14:15:52.933491 kernel: DMA32 empty
Jan 30 14:15:52.933500 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 30 14:15:52.933508 kernel: Movable zone start for each node
Jan 30 14:15:52.933515 kernel: Early memory node ranges
Jan 30 14:15:52.933523 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 30 14:15:52.933530 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 30 14:15:52.933538 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 30 14:15:52.933545 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 30 14:15:52.933553 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 30 14:15:52.933560 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 30 14:15:52.933568 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 30 14:15:52.933576 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 14:15:52.933585 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 30 14:15:52.933593 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:15:52.933600 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 14:15:52.933611 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:15:52.933620 kernel: psci: Trusted OS migration not required
Jan 30 14:15:52.933628 kernel: psci: SMC Calling Convention v1.1
Jan 30 14:15:52.933638 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 14:15:52.933646 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:15:52.933654 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:15:52.933663 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:15:52.933671 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:15:52.933679 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:15:52.933690 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 14:15:52.933698 kernel: CPU features: detected: Spectre-v4
Jan 30 14:15:52.933706 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:15:52.933715 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 14:15:52.933725 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 14:15:52.933733 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 14:15:52.933741 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 14:15:52.933749 kernel: alternatives: applying boot alternatives
Jan 30 14:15:52.933759 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:15:52.933767 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:15:52.933775 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:15:52.933784 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:15:52.933792 kernel: Fallback order for Node 0: 0
Jan 30 14:15:52.933800 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 30 14:15:52.933808 kernel: Policy zone: Normal
Jan 30 14:15:52.933818 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:15:52.933828 kernel: software IO TLB: area num 2.
Jan 30 14:15:52.933838 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 30 14:15:52.933850 kernel: Memory: 3882948K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 213052K reserved, 0K cma-reserved)
Jan 30 14:15:52.933860 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:15:52.933869 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:15:52.933880 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:15:52.933891 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:15:52.933901 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:15:52.933910 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:15:52.933921 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:15:52.933934 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:15:52.934019 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:15:52.934028 kernel: GICv3: 256 SPIs implemented
Jan 30 14:15:52.934040 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:15:52.934049 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:15:52.934059 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 14:15:52.934069 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 14:15:52.934078 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 14:15:52.934088 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 14:15:52.934098 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 14:15:52.934108 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 30 14:15:52.934117 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 30 14:15:52.934133 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:15:52.934144 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:15:52.934153 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 14:15:52.934163 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 14:15:52.934171 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 14:15:52.934179 kernel: Console: colour dummy device 80x25
Jan 30 14:15:52.934189 kernel: ACPI: Core revision 20230628
Jan 30 14:15:52.934198 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 14:15:52.934206 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:15:52.934215 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:15:52.934224 kernel: landlock: Up and running.
Jan 30 14:15:52.934233 kernel: SELinux: Initializing.
Jan 30 14:15:52.934241 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:15:52.934265 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:15:52.936379 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:15:52.936493 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:15:52.936502 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:15:52.936511 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:15:52.936519 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 14:15:52.936537 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 14:15:52.936545 kernel: Remapping and enabling EFI services.
Jan 30 14:15:52.936552 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:15:52.936559 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:15:52.936567 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 14:15:52.936575 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 30 14:15:52.936582 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:15:52.936590 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 14:15:52.936597 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:15:52.936605 kernel: SMP: Total of 2 processors activated.
Jan 30 14:15:52.936613 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:15:52.936621 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 14:15:52.936634 kernel: CPU features: detected: Common not Private translations
Jan 30 14:15:52.936643 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:15:52.936651 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 14:15:52.936658 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 14:15:52.936666 kernel: CPU features: detected: LSE atomic instructions
Jan 30 14:15:52.936674 kernel: CPU features: detected: Privileged Access Never
Jan 30 14:15:52.936681 kernel: CPU features: detected: RAS Extension Support
Jan 30 14:15:52.936691 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 14:15:52.936699 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:15:52.936706 kernel: alternatives: applying system-wide alternatives
Jan 30 14:15:52.936714 kernel: devtmpfs: initialized
Jan 30 14:15:52.936723 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:15:52.936730 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:15:52.936738 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:15:52.936748 kernel: SMBIOS 3.0.0 present.
Jan 30 14:15:52.936756 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 30 14:15:52.936764 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:15:52.936771 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:15:52.936779 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:15:52.936787 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:15:52.936795 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:15:52.936802 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Jan 30 14:15:52.936810 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:15:52.936820 kernel: cpuidle: using governor menu
Jan 30 14:15:52.936828 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:15:52.936836 kernel: ASID allocator initialised with 32768 entries
Jan 30 14:15:52.936844 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:15:52.936852 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:15:52.936860 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 14:15:52.936867 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 14:15:52.936875 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:15:52.936884 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:15:52.936893 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:15:52.936901 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:15:52.936908 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:15:52.936916 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:15:52.936924 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:15:52.936931 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:15:52.936939 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:15:52.936947 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:15:52.936954 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:15:52.936965 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:15:52.936972 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:15:52.936980 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:15:52.936988 kernel: ACPI: Interpreter enabled
Jan 30 14:15:52.936995 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:15:52.937003 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 14:15:52.937010 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 14:15:52.937018 kernel: printk: console [ttyAMA0] enabled
Jan 30 14:15:52.937026 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:15:52.939305 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:15:52.939604 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 14:15:52.939697 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 14:15:52.939768 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 14:15:52.939838 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 14:15:52.939849 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 14:15:52.939857 kernel: PCI host bridge to bus 0000:00
Jan 30 14:15:52.939944 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 14:15:52.940010 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 14:15:52.940072 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 14:15:52.940239 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:15:52.942494 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 14:15:52.942612 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 30 14:15:52.942892 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 30 14:15:52.942981 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 14:15:52.943062 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 14:15:52.943132 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 30 14:15:52.943208 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 14:15:52.944721 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 30 14:15:52.944829 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 14:15:52.944907 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 30 14:15:52.945107 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 14:15:52.945186 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 30 14:15:52.945295 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 14:15:52.945375 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 30 14:15:52.945452 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 14:15:52.945544 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 30 14:15:52.945628 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 14:15:52.945705 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 30 14:15:52.945797 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 14:15:52.945874 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 30 14:15:52.945952 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 30 14:15:52.946027 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 30 14:15:52.947519 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 30 14:15:52.947679 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 30 14:15:52.947767 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 14:15:52.947865 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 30 14:15:52.947944 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 14:15:52.948027 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 14:15:52.948112 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 14:15:52.948182 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 30 14:15:52.948309 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 30 14:15:52.948570 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 30 14:15:52.948661 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 30 14:15:52.948751 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 30 14:15:52.948836 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 30 14:15:52.948919 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 14:15:52.948999 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 30 14:15:52.949079 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 30 14:15:52.949189 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 30 14:15:52.950865 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 30 14:15:52.951128 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 14:15:52.951224 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 14:15:52.954951 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 30 14:15:52.955088 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 30 14:15:52.955188 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 14:15:52.955410 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 30 14:15:52.955521 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 30 14:15:52.955629 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 30 14:15:52.955731 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 30 14:15:52.955824 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 30 14:15:52.955917 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 30 14:15:52.956014 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 14:15:52.956110 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 30 14:15:52.956201 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 30 14:15:52.956356 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 14:15:52.956451 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 30 14:15:52.956542 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 30 14:15:52.956641 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 14:15:52.956738 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 30 14:15:52.956829 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 30 14:15:52.956925 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 14:15:52.957018 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 30 14:15:52.957113 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 30 14:15:52.957211 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 14:15:52.958431 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 30 14:15:52.958527 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 30 14:15:52.958602 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 14:15:52.958669 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 30 14:15:52.958741 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 30 14:15:52.958823 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 14:15:52.958902 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 30 14:15:52.958970 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 30 14:15:52.959051 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 30 14:15:52.959121 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 14:15:52.959220 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 30 14:15:52.959328 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 14:15:52.959422 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 30 14:15:52.959511 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 14:15:52.959588 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 30 14:15:52.959657 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 14:15:52.959733 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 30 14:15:52.959802 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 14:15:52.959875 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 30 14:15:52.959947 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 14:15:52.960021 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 30 14:15:52.960090 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 14:15:52.960162 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 30 14:15:52.960232 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 14:15:52.960403 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 30 14:15:52.960489 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 14:15:52.960567 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 30 14:15:52.960638 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 30 14:15:52.960710 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 30 14:15:52.960779 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 14:15:52.960862 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 30 14:15:52.960935 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 14:15:52.961006 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 30 14:15:52.961079 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 14:15:52.961152 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 30 14:15:52.961221 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 14:15:52.961945 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 30 14:15:52.962055 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 14:15:52.962128 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 30 14:15:52.962195 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 14:15:52.962297 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 30 14:15:52.962382 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 14:15:52.962455 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 30 14:15:52.962523 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 14:15:52.962592 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 30 14:15:52.962657 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 30 14:15:52.962732 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 30 14:15:52.962825 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 30 14:15:52.962902 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 14:15:52.962974 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 30 14:15:52.963043 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 14:15:52.963110 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 14:15:52.963175 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 30 14:15:52.963242 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 14:15:52.963733 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 30 14:15:52.963819 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 14:15:52.963903 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 14:15:52.963981 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 30 14:15:52.964047 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 14:15:52.964125 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 14:15:52.964195 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 30 14:15:52.965414 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 14:15:52.965527 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 14:15:52.965593 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 30 14:15:52.965660 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 14:15:52.965740 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 14:15:52.965822 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 14:15:52.965904 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 14:15:52.965976 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 30 14:15:52.966064 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 14:15:52.966141 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 30 14:15:52.966217 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 30 14:15:52.967622 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 14:15:52.967719 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 14:15:52.967789 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 30 14:15:52.967862 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 14:15:52.967942 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 30 14:15:52.968029 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 30 14:15:52.968108 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 14:15:52.968190 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 14:15:52.968353 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 30 14:15:52.968442 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 14:15:52.968523 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 30 14:15:52.968595 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 30 14:15:52.968667 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 30 14:15:52.968746 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 14:15:52.968817 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 14:15:52.968889 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 30 14:15:52.968958 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 14:15:52.969035 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 14:15:52.969480 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 14:15:52.969563 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 30 14:15:52.969629 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 14:15:52.969707 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 14:15:52.969774 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 30 14:15:52.969838 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 30 14:15:52.969902 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 14:15:52.969972 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 14:15:52.970035 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 14:15:52.970094 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 14:15:52.970168 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 14:15:52.970235 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 30 14:15:52.970364 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 14:15:52.970443 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 30 14:15:52.970511 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 30 14:15:52.970576 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 14:15:52.970647 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 30 14:15:52.970714 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 30 14:15:52.970785 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 14:15:52.970872 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 30 14:15:52.970939 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 30 14:15:52.971001 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 14:15:52.971072 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 30 14:15:52.971134 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 30 14:15:52.972389 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 14:15:52.972556 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 30 14:15:52.972623 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 30 14:15:52.972693 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 14:15:52.972768 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 30 14:15:52.972829 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 30 14:15:52.972890 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 14:15:52.972966 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 30 14:15:52.973027 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 30 14:15:52.973088 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 14:15:52.973160 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 30 14:15:52.973225 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 30 14:15:52.974524 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 14:15:52.974553 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 14:15:52.974562 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 14:15:52.974571 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 14:15:52.974579 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 14:15:52.974587 kernel: iommu: Default domain type: Translated
Jan 30 14:15:52.974596 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:15:52.974611 kernel: efivars: Registered efivars operations
Jan 30 14:15:52.974619 kernel: vgaarb: loaded
Jan 30 14:15:52.974627 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:15:52.974635 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:15:52.974643 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:15:52.974651 kernel: pnp: PnP ACPI init
Jan 30 14:15:52.974737 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 14:15:52.974750 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 14:15:52.974761 kernel: NET: Registered PF_INET protocol family
Jan 30 14:15:52.974769 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:15:52.974778 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:15:52.974785 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:15:52.974793 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:15:52.974801 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:15:52.974809 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:15:52.974817 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:15:52.974825 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:15:52.974835 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:15:52.974915 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 30 14:15:52.974927 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:15:52.974938 kernel: kvm [1]: HYP mode not available
Jan 30 14:15:52.974947 kernel: Initialise system trusted keyrings
Jan 30 14:15:52.974955 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:15:52.974965 kernel: Key type asymmetric registered
Jan 30 14:15:52.974974 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:15:52.974982 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:15:52.974993 kernel: io scheduler mq-deadline registered
Jan 30 14:15:52.975001 kernel: io scheduler kyber registered
Jan 30 14:15:52.975009 kernel: io scheduler bfq registered
Jan 30 14:15:52.975019 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 14:15:52.975093 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 30 14:15:52.975163 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 30 14:15:52.975233 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:15:52.976646 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 30 14:15:52.976748 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 30 14:15:52.976815 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:15:52.976887 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 30 14:15:52.976955 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 30 14:15:52.977022 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:15:52.977387 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 30 14:15:52.977485 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 30 14:15:52.977558 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:15:52.977631 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 30 14:15:52.977698 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 30 14:15:52.977765 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:15:52.977846 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 30 14:15:52.977915 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 30 14:15:52.977982 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:15:52.978059 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 30 14:15:52.978128 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 30 14:15:52.978196 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:15:52.978303 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 30 14:15:52.978380 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 30 14:15:52.978446 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:15:52.978457 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 30 14:15:52.978527 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 30 14:15:52.978595 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 30 14:15:52.978661 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:15:52.978676 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 14:15:52.978685 kernel: ACPI: button: Power Button [PWRB]
Jan 30 14:15:52.978693 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 14:15:52.978767 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 30 14:15:52.978843 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 30 14:15:52.978855 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:15:52.978863 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 30 14:15:52.978933 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 30 14:15:52.978946 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 30 14:15:52.978954 kernel: thunder_xcv, ver 1.0
Jan 30 14:15:52.978962 kernel: thunder_bgx, ver 1.0
Jan 30 14:15:52.978970 kernel: nicpf, ver 1.0
Jan 30 14:15:52.978977 kernel: nicvf, ver 1.0
Jan 30 14:15:52.979062 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 14:15:52.979128 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:15:52 UTC (1738246552)
Jan 30 14:15:52.979138 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:15:52.979149 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 14:15:52.979157 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 14:15:52.979165 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 14:15:52.979173 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:15:52.979181 kernel: Segment Routing with IPv6
Jan 30 14:15:52.979189 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:15:52.979198 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:15:52.979206 kernel: Key type dns_resolver registered
Jan 30 14:15:52.979214 kernel: registered taskstats version 1
Jan 30 14:15:52.979224 kernel: Loading compiled-in X.509 certificates
Jan 30 14:15:52.979232 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 14:15:52.979240 kernel: Key type .fscrypt registered
Jan 30 14:15:52.980108 kernel: Key type fscrypt-provisioning registered
Jan 30 14:15:52.980138 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:15:52.980148 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:15:52.980158 kernel: ima: No architecture policies found
Jan 30 14:15:52.980168 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 14:15:52.980176 kernel: clk: Disabling unused clocks
Jan 30 14:15:52.980191 kernel: Freeing unused kernel memory: 39360K
Jan 30 14:15:52.980200 kernel: Run /init as init process
Jan 30 14:15:52.980207 kernel: with arguments:
Jan 30 14:15:52.980216 kernel: /init
Jan 30 14:15:52.980224 kernel: with environment:
Jan 30 14:15:52.980232 kernel: HOME=/
Jan 30 14:15:52.980241 kernel: TERM=linux
Jan 30 14:15:52.980317 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:15:52.980332 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:15:52.980350 systemd[1]: Detected virtualization kvm.
Jan 30 14:15:52.980359 systemd[1]: Detected architecture arm64.
Jan 30 14:15:52.980368 systemd[1]: Running in initrd.
Jan 30 14:15:52.980376 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:15:52.980384 systemd[1]: Hostname set to .
Jan 30 14:15:52.980392 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:15:52.980400 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:15:52.980410 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:15:52.980419 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:15:52.980428 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:15:52.980437 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:15:52.980446 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:15:52.980454 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:15:52.980465 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:15:52.980475 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:15:52.980483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:15:52.980491 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:15:52.980500 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:15:52.980509 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:15:52.980517 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:15:52.980525 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:15:52.980533 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:15:52.980543 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:15:52.980552 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:15:52.980560 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:15:52.980569 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:15:52.980577 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:15:52.980585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:15:52.980594 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:15:52.980602 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:15:52.980611 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:15:52.980621 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:15:52.980629 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:15:52.980638 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:15:52.980646 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:15:52.980654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:15:52.980663 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:15:52.980710 systemd-journald[235]: Collecting audit messages is disabled.
Jan 30 14:15:52.980733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:15:52.980742 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:15:52.980753 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:15:52.980762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:15:52.980770 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:15:52.980779 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:15:52.980787 kernel: Bridge firewalling registered
Jan 30 14:15:52.980795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:15:52.980805 systemd-journald[235]: Journal started
Jan 30 14:15:52.980827 systemd-journald[235]: Runtime Journal (/run/log/journal/f62e62ad28e04bd1b7e2ac3b332562c8) is 8.0M, max 76.6M, 68.6M free.
Jan 30 14:15:52.948011 systemd-modules-load[236]: Inserted module 'overlay'
Jan 30 14:15:52.987367 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:15:52.987399 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:15:52.976402 systemd-modules-load[236]: Inserted module 'br_netfilter'
Jan 30 14:15:52.992611 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:15:52.994870 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:15:52.998198 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:15:53.016743 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:15:53.022530 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:15:53.026404 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:15:53.028562 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:15:53.037570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:15:53.041589 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:15:53.054807 dracut-cmdline[269]: dracut-dracut-053
Jan 30 14:15:53.061863 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:15:53.082632 systemd-resolved[272]: Positive Trust Anchors:
Jan 30 14:15:53.082653 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:15:53.082686 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:15:53.089563 systemd-resolved[272]: Defaulting to hostname 'linux'.
Jan 30 14:15:53.090762 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:15:53.091829 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:15:53.170392 kernel: SCSI subsystem initialized
Jan 30 14:15:53.176315 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:15:53.184338 kernel: iscsi: registered transport (tcp)
Jan 30 14:15:53.202339 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:15:53.202411 kernel: QLogic iSCSI HBA Driver
Jan 30 14:15:53.253807 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:15:53.261464 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:15:53.282323 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:15:53.282405 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:15:53.283345 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:15:53.337374 kernel: raid6: neonx8 gen() 15448 MB/s
Jan 30 14:15:53.354376 kernel: raid6: neonx4 gen() 15216 MB/s
Jan 30 14:15:53.371341 kernel: raid6: neonx2 gen() 13025 MB/s
Jan 30 14:15:53.389371 kernel: raid6: neonx1 gen() 10224 MB/s
Jan 30 14:15:53.405398 kernel: raid6: int64x8 gen() 6887 MB/s
Jan 30 14:15:53.422355 kernel: raid6: int64x4 gen() 7236 MB/s
Jan 30 14:15:53.439470 kernel: raid6: int64x2 gen() 5661 MB/s
Jan 30 14:15:53.456370 kernel: raid6: int64x1 gen() 4645 MB/s
Jan 30 14:15:53.456451 kernel: raid6: using algorithm neonx8 gen() 15448 MB/s
Jan 30 14:15:53.474146 kernel: raid6: .... xor() 10727 MB/s, rmw enabled
Jan 30 14:15:53.474292 kernel: raid6: using neon recovery algorithm
Jan 30 14:15:53.481483 kernel: xor: measuring software checksum speed
Jan 30 14:15:53.481581 kernel: 8regs : 19750 MB/sec
Jan 30 14:15:53.481617 kernel: 32regs : 19533 MB/sec
Jan 30 14:15:53.481635 kernel: arm64_neon : 26717 MB/sec
Jan 30 14:15:53.482745 kernel: xor: using function: arm64_neon (26717 MB/sec)
Jan 30 14:15:53.543300 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:15:53.559767 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:15:53.571768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:15:53.587618 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 30 14:15:53.591348 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:15:53.601585 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:15:53.624295 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 30 14:15:53.669594 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:15:53.678664 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:15:53.732694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:15:53.743842 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:15:53.764713 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:15:53.766395 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:15:53.768571 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:15:53.769139 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:15:53.778496 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:15:53.801894 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:15:53.843419 kernel: scsi host0: Virtio SCSI HBA
Jan 30 14:15:53.848418 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 14:15:53.851345 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 30 14:15:53.880611 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:15:53.880757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:15:53.884610 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:15:53.885177 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:15:53.885403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:15:53.885995 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:15:53.894901 kernel: ACPI: bus type USB registered
Jan 30 14:15:53.894955 kernel: usbcore: registered new interface driver usbfs
Jan 30 14:15:53.894966 kernel: usbcore: registered new interface driver hub
Jan 30 14:15:53.894976 kernel: usbcore: registered new device driver usb
Jan 30 14:15:53.894585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:15:53.925668 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 14:15:53.933269 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 30 14:15:53.933477 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 14:15:53.933565 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 14:15:53.933647 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 30 14:15:53.933732 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 14:15:53.933816 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 30 14:15:53.938758 kernel: hub 1-0:1.0: USB hub found
Jan 30 14:15:53.938918 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 14:15:53.939011 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 30 14:15:53.939098 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 14:15:53.939234 kernel: hub 2-0:1.0: USB hub found
Jan 30 14:15:53.939835 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 14:15:53.939939 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 14:15:53.939952 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 30 14:15:53.925907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:15:53.939467 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:15:53.944508 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 30 14:15:53.964001 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 30 14:15:53.964121 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 30 14:15:53.964216 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 30 14:15:53.964420 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 14:15:53.964513 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:15:53.964525 kernel: GPT:17805311 != 80003071
Jan 30 14:15:53.964534 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:15:53.964544 kernel: GPT:17805311 != 80003071
Jan 30 14:15:53.964553 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:15:53.964563 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:15:53.964580 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 30 14:15:53.960836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:15:54.006290 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (501) Jan 30 14:15:54.014395 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (513) Jan 30 14:15:54.016119 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 30 14:15:54.022842 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 30 14:15:54.029139 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 30 14:15:54.030929 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 30 14:15:54.042796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 14:15:54.050473 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:15:54.057206 disk-uuid[572]: Primary Header is updated. Jan 30 14:15:54.057206 disk-uuid[572]: Secondary Entries is updated. Jan 30 14:15:54.057206 disk-uuid[572]: Secondary Header is updated. Jan 30 14:15:54.077530 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:15:54.173345 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 14:15:54.417309 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 30 14:15:54.554735 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 30 14:15:54.554797 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 30 14:15:54.557334 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 30 14:15:54.611663 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 30 14:15:54.611890 kernel: usbcore: registered new interface driver usbhid Jan 30 14:15:54.611903 kernel: usbhid: USB HID core driver Jan 30 14:15:55.087109 disk-uuid[573]: The operation has completed successfully. Jan 30 14:15:55.087932 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:15:55.144433 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:15:55.144550 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:15:55.154564 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:15:55.159638 sh[586]: Success Jan 30 14:15:55.172320 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 14:15:55.232658 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:15:55.242446 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:15:55.245309 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
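verity-setup maps /dev/mapper/usr with dm-verity: every read from the /usr partition is checked against a Merkle tree of block hashes whose root must equal the root hash the initrd was given, with the sha256-ce implementation providing the hardware-accelerated digest. A toy sketch of the general scheme, assuming 4 KiB blocks and ignoring dm-verity's real on-disk layout, salt handling, and padding:

```python
import hashlib

BLOCK = 4096            # dm-verity's default data/hash block size
PER_NODE = BLOCK // 32  # sha256 digests that fit in one hash block

def verity_root(data: bytes) -> bytes:
    """Toy Merkle root over fixed-size blocks; real dm-verity metadata
    (superblock, salt, padding) is more involved."""
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)] or [hashlib.sha256(b"").digest()]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + PER_NODE])).digest()
                 for i in range(0, len(level), PER_NODE)]
    return level[0]

# The device only activates if the computed root matches the expected hash,
# so a tampered /usr fails closed instead of mounting silently corrupted.
print(verity_root(b"x" * 3 * BLOCK).hex())
```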
Jan 30 14:15:55.272535 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 14:15:55.272608 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:15:55.272625 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:15:55.273691 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:15:55.273719 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:15:55.280308 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 14:15:55.282618 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:15:55.283421 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:15:55.290486 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:15:55.293525 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:15:55.306193 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:15:55.306300 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:15:55.307710 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:15:55.311510 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:15:55.311585 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:15:55.323744 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:15:55.325325 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:15:55.334123 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:15:55.341746 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:15:55.443654 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:15:55.445855 ignition[670]: Ignition 2.19.0 Jan 30 14:15:55.445870 ignition[670]: Stage: fetch-offline Jan 30 14:15:55.445906 ignition[670]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:15:55.445915 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:15:55.446081 ignition[670]: parsed url from cmdline: "" Jan 30 14:15:55.446090 ignition[670]: no config URL provided Jan 30 14:15:55.446098 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:15:55.446105 ignition[670]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:15:55.446111 ignition[670]: failed to fetch config: resource requires networking Jan 30 14:15:55.446312 ignition[670]: Ignition finished successfully Jan 30 14:15:55.451519 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:15:55.454392 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:15:55.473846 systemd-networkd[773]: lo: Link UP Jan 30 14:15:55.473857 systemd-networkd[773]: lo: Gained carrier Jan 30 14:15:55.475555 systemd-networkd[773]: Enumeration completed Jan 30 14:15:55.476018 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:15:55.476021 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 14:15:55.476628 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:15:55.480191 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:15:55.480195 systemd-networkd[773]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:15:55.480912 systemd-networkd[773]: eth0: Link UP Jan 30 14:15:55.480916 systemd-networkd[773]: eth0: Gained carrier Jan 30 14:15:55.480925 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:15:55.482198 systemd[1]: Reached target network.target - Network. Jan 30 14:15:55.486625 systemd-networkd[773]: eth1: Link UP Jan 30 14:15:55.486628 systemd-networkd[773]: eth1: Gained carrier Jan 30 14:15:55.486639 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:15:55.491131 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 14:15:55.506223 ignition[776]: Ignition 2.19.0 Jan 30 14:15:55.506237 ignition[776]: Stage: fetch Jan 30 14:15:55.506461 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:15:55.506472 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:15:55.506575 ignition[776]: parsed url from cmdline: "" Jan 30 14:15:55.506578 ignition[776]: no config URL provided Jan 30 14:15:55.506583 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:15:55.506591 ignition[776]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:15:55.506611 ignition[776]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 30 14:15:55.507325 ignition[776]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 30 14:15:55.517362 systemd-networkd[773]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 14:15:55.545361 systemd-networkd[773]: eth0: DHCPv4 address 157.90.246.176/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 14:15:55.707475 ignition[776]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 30 14:15:55.715379 ignition[776]: GET result: OK Jan 30 14:15:55.715545 ignition[776]: parsing config with SHA512: 5a88386c81eb36c695ed89db5431f415084146af63e25858ce0b094b9764bbd3a2df67d40575dfeecaf8d4971c761a2dd0984822a1e97f4101dd20e167fb4345 Jan 30 14:15:55.721419 unknown[776]: fetched base config from "system" Jan 30 14:15:55.721430 unknown[776]: fetched base config from "system" Jan 30 14:15:55.721799 ignition[776]: fetch: fetch complete Jan 30 14:15:55.721438 unknown[776]: fetched user config from "hetzner" Jan 30 14:15:55.721803 ignition[776]: fetch: fetch passed Jan 30 14:15:55.723978 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:15:55.721852 ignition[776]: Ignition finished successfully Jan 30 14:15:55.735674 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
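Attempt #1 against the metadata service fails with "network is unreachable" because it races DHCP; once eth0 and eth1 have leases, attempt #2 succeeds and Ignition logs the SHA512 of the fetched config before applying it. A minimal sketch of that fetch-and-retry pattern using only the Python standard library; the URL is the one in the log, while the timeout and backoff schedule are assumptions (Ignition's real retry policy differs):

```python
import hashlib
import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

def fetch_userdata(max_attempts: int = 5, delay: float = 1.0) -> bytes:
    """Retry until the link is up; early attempts fail with
    'network is unreachable' until DHCP completes, as seen above."""
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read()
        except urllib.error.URLError as exc:
            print(f"GET attempt #{attempt} failed: {exc}")
            time.sleep(delay)
            delay *= 2  # assumed exponential backoff, not Ignition's exact schedule
    raise RuntimeError("could not reach the metadata service")

config = fetch_userdata()
# Ignition logs the config digest before applying it ("parsing config with SHA512: ...").
print("SHA512:", hashlib.sha512(config).hexdigest())
```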
Jan 30 14:15:55.753903 ignition[783]: Ignition 2.19.0 Jan 30 14:15:55.753913 ignition[783]: Stage: kargs Jan 30 14:15:55.754108 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:15:55.754119 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:15:55.755111 ignition[783]: kargs: kargs passed Jan 30 14:15:55.755168 ignition[783]: Ignition finished successfully Jan 30 14:15:55.756776 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:15:55.761483 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:15:55.796420 ignition[789]: Ignition 2.19.0 Jan 30 14:15:55.796973 ignition[789]: Stage: disks Jan 30 14:15:55.797223 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:15:55.797236 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:15:55.798843 ignition[789]: disks: disks passed Jan 30 14:15:55.798930 ignition[789]: Ignition finished successfully Jan 30 14:15:55.802730 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:15:55.803777 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:15:55.804628 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:15:55.806520 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:15:55.807627 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:15:55.809059 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:15:55.814565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:15:55.839810 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 14:15:55.843136 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:15:55.853466 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:15:55.908294 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 14:15:55.909699 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:15:55.911682 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:15:55.920392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:15:55.923728 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:15:55.931123 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:15:55.935064 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:15:55.936898 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (805) Jan 30 14:15:55.936496 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:15:55.940877 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 30 14:15:55.944128 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:15:55.944155 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:15:55.944166 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:15:55.951213 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:15:55.951319 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:15:55.952590 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 14:15:55.956082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:15:56.006367 coreos-metadata[807]: Jan 30 14:15:56.006 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 30 14:15:56.006367 coreos-metadata[807]: Jan 30 14:15:56.006 INFO Fetch successful Jan 30 14:15:56.008943 coreos-metadata[807]: Jan 30 14:15:56.008 INFO wrote hostname ci-4081-3-0-2-dd601a010b to /sysroot/etc/hostname Jan 30 14:15:56.009611 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:15:56.015448 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:15:56.021640 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:15:56.027408 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:15:56.033131 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:15:56.154146 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:15:56.161501 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:15:56.165510 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:15:56.174351 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:15:56.199325 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:15:56.205663 ignition[924]: INFO : Ignition 2.19.0 Jan 30 14:15:56.205663 ignition[924]: INFO : Stage: mount Jan 30 14:15:56.207525 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:15:56.207525 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:15:56.207525 ignition[924]: INFO : mount: mount passed Jan 30 14:15:56.207525 ignition[924]: INFO : Ignition finished successfully Jan 30 14:15:56.209326 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:15:56.219377 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:15:56.272352 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:15:56.278607 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:15:56.291372 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (934) Jan 30 14:15:56.293155 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:15:56.293210 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:15:56.293221 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:15:56.297306 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:15:56.297381 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:15:56.300028 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
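The flatcar-metadata-hostname agent does exactly two things visible here: GET the hostname from the Hetzner metadata service and write it into the target root. A sketch under the initrd's view of the filesystem; the URL and destination path are both taken from the log:

```python
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

with urllib.request.urlopen(URL, timeout=10) as resp:
    hostname = resp.read().decode().strip()  # "ci-4081-3-0-2-dd601a010b" here

# /sysroot is the real root before the pivot; after switch-root this is /etc/hostname.
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
```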
Jan 30 14:15:56.322711 ignition[950]: INFO : Ignition 2.19.0 Jan 30 14:15:56.322711 ignition[950]: INFO : Stage: files Jan 30 14:15:56.324176 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:15:56.324176 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:15:56.324176 ignition[950]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:15:56.327083 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:15:56.327083 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:15:56.331700 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:15:56.332948 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:15:56.332948 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:15:56.332120 unknown[950]: wrote ssh authorized keys file for user: core Jan 30 14:15:56.335517 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:15:56.335517 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 14:15:56.414281 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:15:56.585380 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:15:56.585380 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:15:56.588663 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 30 14:15:56.873500 systemd-networkd[773]: eth1: Gained IPv6LL Jan 30 14:15:56.873979 systemd-networkd[773]: eth0: Gained IPv6LL Jan 30 14:15:57.278081 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:15:57.662988 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:15:57.662988 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:15:57.666339 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:15:57.666339 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:15:57.666339 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:15:57.666339 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 30 14:15:57.666339 ignition[950]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 14:15:57.666339 ignition[950]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 14:15:57.666339 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 30 14:15:57.666339 ignition[950]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:15:57.666339 ignition[950]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:15:57.666339 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:15:57.666339 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:15:57.666339 ignition[950]: INFO : files: files passed Jan 30 14:15:57.666339 ignition[950]: INFO : Ignition finished successfully Jan 30 14:15:57.667372 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:15:57.680570 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:15:57.683859 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:15:57.685828 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:15:57.686540 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 14:15:57.710694 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:15:57.710694 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:15:57.714219 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:15:57.716343 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:15:57.718117 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:15:57.726549 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:15:57.759733 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:15:57.759870 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:15:57.762743 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:15:57.763490 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:15:57.764797 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:15:57.769545 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:15:57.786710 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:15:57.792501 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:15:57.808872 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:15:57.809682 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:15:57.810464 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:15:57.811646 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:15:57.811780 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:15:57.813487 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:15:57.815522 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:15:57.816553 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:15:57.817707 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:15:57.818985 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:15:57.820206 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:15:57.820931 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:15:57.822136 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:15:57.823407 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:15:57.824777 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:15:57.825808 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:15:57.825941 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:15:57.827277 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:15:57.827968 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:15:57.829056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:15:57.832415 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 30 14:15:57.833402 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:15:57.833553 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:15:57.835631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:15:57.835770 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:15:57.837133 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:15:57.837293 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:15:57.838526 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:15:57.838633 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:15:57.844612 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:15:57.847705 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:15:57.848990 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:15:57.851855 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:15:57.854049 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:15:57.857559 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:15:57.866229 ignition[1003]: INFO : Ignition 2.19.0 Jan 30 14:15:57.866229 ignition[1003]: INFO : Stage: umount Jan 30 14:15:57.869852 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:15:57.869852 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:15:57.869852 ignition[1003]: INFO : umount: umount passed Jan 30 14:15:57.869852 ignition[1003]: INFO : Ignition finished successfully Jan 30 14:15:57.868742 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:15:57.868845 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:15:57.872228 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:15:57.874442 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:15:57.877922 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:15:57.878039 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:15:57.878823 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:15:57.878877 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:15:57.879574 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:15:57.879615 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:15:57.880500 systemd[1]: Stopped target network.target - Network. Jan 30 14:15:57.881331 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:15:57.881388 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:15:57.883599 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:15:57.887011 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:15:57.891425 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:15:57.894923 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:15:57.895654 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:15:57.896362 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 30 14:15:57.896410 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:15:57.897923 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:15:57.897966 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:15:57.899079 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:15:57.899138 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:15:57.900776 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:15:57.900836 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:15:57.901851 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:15:57.903084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:15:57.904828 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:15:57.906221 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:15:57.906371 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:15:57.909794 systemd-networkd[773]: eth0: DHCPv6 lease lost Jan 30 14:15:57.910458 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:15:57.910713 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:15:57.915450 systemd-networkd[773]: eth1: DHCPv6 lease lost Jan 30 14:15:57.918528 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:15:57.918668 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:15:57.920466 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:15:57.920571 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:15:57.924422 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:15:57.924644 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:15:57.926522 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:15:57.926591 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:15:57.933588 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:15:57.934216 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:15:57.934355 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:15:57.939931 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:15:57.940017 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:15:57.941208 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:15:57.941302 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:15:57.942923 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:15:57.957190 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:15:57.957455 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:15:57.965312 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:15:57.965601 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:15:57.968055 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:15:57.968115 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:15:57.969555 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 30 14:15:57.969606 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:15:57.971140 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:15:57.971196 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:15:57.972834 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:15:57.972886 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:15:57.974566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:15:57.974619 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:15:57.988470 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:15:57.989455 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:15:57.989551 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:15:57.994109 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 14:15:57.994180 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:15:57.996454 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:15:57.996523 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:15:58.000485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:15:58.000548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:15:58.002630 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:15:58.004416 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:15:58.005753 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:15:58.013570 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:15:58.021902 systemd[1]: Switching root. Jan 30 14:15:58.056930 systemd-journald[235]: Journal stopped Jan 30 14:15:59.162517 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). Jan 30 14:15:59.162586 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:15:59.162599 kernel: SELinux: policy capability open_perms=1 Jan 30 14:15:59.162609 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:15:59.162622 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:15:59.162632 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:15:59.162645 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:15:59.162655 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:15:59.162666 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:15:59.162676 kernel: audit: type=1403 audit(1738246558.260:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:15:59.162687 systemd[1]: Successfully loaded SELinux policy in 37.818ms. Jan 30 14:15:59.162709 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.435ms. Jan 30 14:15:59.162722 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:15:59.162733 systemd[1]: Detected virtualization kvm. 
Jan 30 14:15:59.162745 systemd[1]: Detected architecture arm64. Jan 30 14:15:59.162759 systemd[1]: Detected first boot. Jan 30 14:15:59.162769 systemd[1]: Hostname set to <ci-4081-3-0-2-dd601a010b>. Jan 30 14:15:59.162779 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:15:59.162790 zram_generator::config[1045]: No configuration found. Jan 30 14:15:59.162802 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:15:59.162815 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 14:15:59.162829 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 14:15:59.162839 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 14:15:59.162851 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:15:59.162864 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 14:15:59.162875 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:15:59.162885 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:15:59.162896 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:15:59.162906 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:15:59.162919 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:15:59.162929 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 14:15:59.162940 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:15:59.162950 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:15:59.162961 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:15:59.162971 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:15:59.162982 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:15:59.162993 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:15:59.163003 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 14:15:59.163016 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:15:59.163027 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 14:15:59.163037 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 14:15:59.163048 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 14:15:59.163058 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:15:59.163069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:15:59.163084 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:15:59.163096 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:15:59.163106 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:15:59.163117 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 14:15:59.163128 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 14:15:59.163138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:15:59.163149 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:15:59.163160 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:15:59.163170 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 14:15:59.163182 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 14:15:59.163193 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 14:15:59.163204 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 14:15:59.163214 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 14:15:59.163225 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 14:15:59.163245 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 14:15:59.165381 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 14:15:59.165403 systemd[1]: Reached target machines.target - Containers. Jan 30 14:15:59.165415 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 14:15:59.165438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:15:59.165451 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:15:59.165462 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 14:15:59.165473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:15:59.165484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:15:59.165497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:15:59.165509 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 14:15:59.165520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:15:59.165531 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 14:15:59.165543 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 14:15:59.165558 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 14:15:59.165570 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 14:15:59.165581 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 14:15:59.165591 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:15:59.165603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:15:59.165614 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:15:59.165625 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 14:15:59.165636 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:15:59.165647 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 14:15:59.165657 systemd[1]: Stopped verity-setup.service. Jan 30 14:15:59.165668 kernel: loop: module loaded Jan 30 14:15:59.165683 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:15:59.165694 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 30 14:15:59.165707 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:15:59.165717 kernel: fuse: init (API version 7.39) Jan 30 14:15:59.165727 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:15:59.165739 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:15:59.165749 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:15:59.165762 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:15:59.165773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:15:59.165783 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:15:59.165795 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:15:59.165806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:15:59.165819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:15:59.165831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:15:59.165843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:15:59.165854 kernel: ACPI: bus type drm_connector registered Jan 30 14:15:59.165864 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:15:59.165875 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:15:59.165886 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:15:59.165897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:15:59.165907 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:15:59.165920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:15:59.165930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:15:59.165943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:15:59.165954 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:15:59.165966 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:15:59.165976 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:15:59.165987 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:15:59.165998 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:15:59.166011 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:15:59.166070 systemd-journald[1115]: Collecting audit messages is disabled. Jan 30 14:15:59.166099 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 14:15:59.166111 systemd-journald[1115]: Journal started Jan 30 14:15:59.166136 systemd-journald[1115]: Runtime Journal (/run/log/journal/f62e62ad28e04bd1b7e2ac3b332562c8) is 8.0M, max 76.6M, 68.6M free. Jan 30 14:15:58.817686 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:15:58.844981 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 14:15:58.845448 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 14:15:59.173749 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 30 14:15:59.176400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:15:59.191348 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:15:59.194294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:15:59.202350 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 14:15:59.202438 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:15:59.213265 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:15:59.224199 kernel: loop0: detected capacity change from 0 to 8 Jan 30 14:15:59.224301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:15:59.227786 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:15:59.231800 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:15:59.231149 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:15:59.239537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:15:59.240704 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:15:59.242042 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:15:59.244334 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:15:59.247820 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:15:59.264303 kernel: loop1: detected capacity change from 0 to 189592 Jan 30 14:15:59.277059 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:15:59.289483 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 14:15:59.301519 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:15:59.310959 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:15:59.326595 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:15:59.335615 systemd-tmpfiles[1141]: ACLs are not supported, ignoring. Jan 30 14:15:59.335640 systemd-tmpfiles[1141]: ACLs are not supported, ignoring. Jan 30 14:15:59.350164 systemd-journald[1115]: Time spent on flushing to /var/log/journal/f62e62ad28e04bd1b7e2ac3b332562c8 is 61.252ms for 1137 entries. Jan 30 14:15:59.350164 systemd-journald[1115]: System Journal (/var/log/journal/f62e62ad28e04bd1b7e2ac3b332562c8) is 8.0M, max 584.8M, 576.8M free. Jan 30 14:15:59.418213 systemd-journald[1115]: Received client request to flush runtime journal. Jan 30 14:15:59.418281 kernel: loop2: detected capacity change from 0 to 114328 Jan 30 14:15:59.351700 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 14:15:59.354553 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:15:59.368606 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 30 14:15:59.385703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:15:59.397632 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:15:59.402725 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:15:59.425295 kernel: loop3: detected capacity change from 0 to 114432 Jan 30 14:15:59.427334 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:15:59.459619 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:15:59.467798 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:15:59.476905 kernel: loop4: detected capacity change from 0 to 8 Jan 30 14:15:59.482329 kernel: loop5: detected capacity change from 0 to 189592 Jan 30 14:15:59.490765 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 30 14:15:59.491115 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 30 14:15:59.500422 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:15:59.506296 kernel: loop6: detected capacity change from 0 to 114328 Jan 30 14:15:59.515309 kernel: loop7: detected capacity change from 0 to 114432 Jan 30 14:15:59.529916 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 30 14:15:59.531334 (sd-merge)[1187]: Merged extensions into '/usr'. Jan 30 14:15:59.539894 systemd[1]: Reloading requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:15:59.539917 systemd[1]: Reloading... Jan 30 14:15:59.692330 zram_generator::config[1216]: No configuration found. Jan 30 14:15:59.857376 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:15:59.871672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:15:59.919563 systemd[1]: Reloading finished in 378 ms. Jan 30 14:15:59.956965 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:15:59.961762 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:15:59.973673 systemd[1]: Starting ensure-sysext.service... Jan 30 14:15:59.977574 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:15:59.992455 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:15:59.992483 systemd[1]: Reloading... Jan 30 14:16:00.013920 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:16:00.014180 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:16:00.015017 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:16:00.015413 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jan 30 14:16:00.015480 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jan 30 14:16:00.020097 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 14:16:00.020112 systemd-tmpfiles[1254]: Skipping /boot Jan 30 14:16:00.032393 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:16:00.032411 systemd-tmpfiles[1254]: Skipping /boot Jan 30 14:16:00.085302 zram_generator::config[1283]: No configuration found. Jan 30 14:16:00.180938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:16:00.229341 systemd[1]: Reloading finished in 236 ms. Jan 30 14:16:00.248086 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:16:00.256020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:16:00.269629 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:16:00.281220 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:16:00.289747 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:16:00.299729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:16:00.304593 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:16:00.309988 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:16:00.313853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:16:00.319732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:16:00.323645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:16:00.340469 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:16:00.341985 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:16:00.348635 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:16:00.351309 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:16:00.351566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:16:00.358007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:16:00.368280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:16:00.370429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:16:00.372864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:16:00.373487 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:16:00.375729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:16:00.376576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:16:00.379340 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:16:00.380078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:16:00.381969 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 30 14:16:00.389765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:16:00.395767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:16:00.400677 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:16:00.408796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:16:00.421702 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:16:00.422885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:16:00.430767 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:16:00.438089 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 30 14:16:00.446652 systemd[1]: Finished ensure-sysext.service. Jan 30 14:16:00.447712 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:16:00.448999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:16:00.449160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:16:00.460813 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 14:16:00.462799 augenrules[1358]: No rules Jan 30 14:16:00.465713 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:16:00.482373 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:16:00.482603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:16:00.483970 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:16:00.485558 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:16:00.485725 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:16:00.487390 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:16:00.487594 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:16:00.488988 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:16:00.491336 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:16:00.495469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:16:00.509592 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:16:00.511332 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:16:00.511465 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:16:00.511525 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:16:00.598867 systemd-resolved[1329]: Positive Trust Anchors: Jan 30 14:16:00.599187 systemd-resolved[1329]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:16:00.599337 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:16:00.604622 systemd-resolved[1329]: Using system hostname 'ci-4081-3-0-2-dd601a010b'. Jan 30 14:16:00.607922 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:16:00.612074 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:16:00.653060 systemd-networkd[1379]: lo: Link UP Jan 30 14:16:00.653073 systemd-networkd[1379]: lo: Gained carrier Jan 30 14:16:00.654727 systemd-networkd[1379]: Enumeration completed Jan 30 14:16:00.654863 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:16:00.655763 systemd[1]: Reached target network.target - Network. Jan 30 14:16:00.666858 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:16:00.671056 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 14:16:00.671946 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:16:00.681841 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 14:16:00.808293 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1378) Jan 30 14:16:00.809337 systemd-networkd[1379]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:16:00.809344 systemd-networkd[1379]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:16:00.811913 systemd-networkd[1379]: eth1: Link UP Jan 30 14:16:00.811928 systemd-networkd[1379]: eth1: Gained carrier Jan 30 14:16:00.811949 systemd-networkd[1379]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:16:00.818527 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:16:00.818544 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:16:00.819551 systemd-networkd[1379]: eth0: Link UP Jan 30 14:16:00.819560 systemd-networkd[1379]: eth0: Gained carrier Jan 30 14:16:00.819581 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:16:00.840439 systemd-networkd[1379]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 14:16:00.841706 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection. 
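The 'Positive Trust Anchors' record that systemd-resolved prints above is the IANA root-zone DNSSEC trust anchor: owner '.', key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256), followed by the SHA-256 digest of the root key-signing key. A minimal Python parser for that presentation format, included purely as illustration:

# Parse the DS record exactly as systemd-resolved logged it.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, klass, rtype, key_tag, algorithm, digest_type, digest = record.split()
assert (klass, rtype) == ("IN", "DS")
print(f"owner={owner!r} key_tag={int(key_tag)} "
      f"algorithm={int(algorithm)} digest_type={int(digest_type)}")
print(f"digest length: {len(bytes.fromhex(digest))} bytes")  # 32 == SHA-256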
Jan 30 14:16:00.873623 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:16:00.875668 systemd-networkd[1379]: eth0: DHCPv4 address 157.90.246.176/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 14:16:00.876970 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection. Jan 30 14:16:00.881527 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 30 14:16:00.881675 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:16:00.888512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:16:00.892818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:16:00.898539 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:16:00.899747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:16:00.899793 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:16:00.902011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 14:16:00.907558 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:16:00.929522 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:16:00.929723 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:16:00.938182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:16:00.939524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:16:00.941389 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:16:00.946180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:16:00.948395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:16:00.950800 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 30 14:16:00.950882 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 14:16:00.950895 kernel: [drm] features: -context_init Jan 30 14:16:00.950757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:16:00.950806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:16:00.959286 kernel: [drm] number of scanouts: 1 Jan 30 14:16:00.959373 kernel: [drm] number of cap sets: 0 Jan 30 14:16:00.965287 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 30 14:16:00.987141 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 14:16:00.993220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:16:00.994792 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 14:16:01.008289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:16:01.008544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
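Note that both NICs above receive /32 addresses (10.0.0.3/32 on eth1, 157.90.246.176/32 on eth0) whose gateways (10.0.0.1, 172.31.1.1) lie outside those prefixes; this is the usual Hetzner Cloud layout, where the gateway is reached through an on-link host route rather than ordinary subnet adjacency. Python's ipaddress module makes the point quickly; a small check, for illustration:

import ipaddress

iface = ipaddress.ip_interface("157.90.246.176/32")
gateway = ipaddress.ip_address("172.31.1.1")

# A /32 network contains only the host itself, so the gateway can
# never be on-subnet; DHCP must push an on-link route to reach it.
print(gateway in iface.network)          # False
print(list(iface.network.hosts()))       # [IPv4Address('157.90.246.176')]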
Jan 30 14:16:01.015552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:16:01.094548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:16:01.148200 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:16:01.156657 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:16:01.182341 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:16:01.214040 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:16:01.215458 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:16:01.216097 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:16:01.216880 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:16:01.217803 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:16:01.218771 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:16:01.219525 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:16:01.220244 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:16:01.221043 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:16:01.221116 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:16:01.221782 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:16:01.224213 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:16:01.226895 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:16:01.236904 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:16:01.239912 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:16:01.242090 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:16:01.242932 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:16:01.243542 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:16:01.244095 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:16:01.244131 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:16:01.251506 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:16:01.256635 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:16:01.260407 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:16:01.264481 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:16:01.270467 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:16:01.273631 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:16:01.275087 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:16:01.280716 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 30 14:16:01.286477 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:16:01.290752 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 30 14:16:01.295504 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:16:01.301016 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:16:01.309794 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:16:01.312193 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:16:01.312918 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:16:01.319498 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:16:01.324311 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:16:01.333503 extend-filesystems[1450]: Found loop4 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found loop5 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found loop6 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found loop7 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found sda Jan 30 14:16:01.333503 extend-filesystems[1450]: Found sda1 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found sda2 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found sda3 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found usr Jan 30 14:16:01.333503 extend-filesystems[1450]: Found sda4 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found sda6 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found sda7 Jan 30 14:16:01.333503 extend-filesystems[1450]: Found sda9 Jan 30 14:16:01.333503 extend-filesystems[1450]: Checking size of /dev/sda9 Jan 30 14:16:01.329333 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:16:01.361636 dbus-daemon[1448]: [system] SELinux support is enabled Jan 30 14:16:01.375213 jq[1449]: false Jan 30 14:16:01.335784 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:16:01.335965 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:16:01.347408 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:16:01.347656 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:16:01.364103 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:16:01.368025 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:16:01.369675 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 14:16:01.380052 jq[1461]: true Jan 30 14:16:01.388844 coreos-metadata[1447]: Jan 30 14:16:01.383 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 30 14:16:01.391595 coreos-metadata[1447]: Jan 30 14:16:01.391 INFO Fetch successful Jan 30 14:16:01.394163 coreos-metadata[1447]: Jan 30 14:16:01.391 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 14:16:01.394163 coreos-metadata[1447]: Jan 30 14:16:01.392 INFO Fetch successful Jan 30 14:16:01.404465 extend-filesystems[1450]: Resized partition /dev/sda9 Jan 30 14:16:01.409198 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:16:01.410357 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:16:01.413523 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:16:01.413572 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:16:01.423550 extend-filesystems[1488]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:16:01.450112 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 14:16:01.442970 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:16:01.458573 tar[1478]: linux-arm64/helm Jan 30 14:16:01.467447 jq[1480]: true Jan 30 14:16:01.510990 systemd-logind[1459]: New seat seat0. Jan 30 14:16:01.514122 update_engine[1460]: I20250130 14:16:01.513860 1460 main.cc:92] Flatcar Update Engine starting Jan 30 14:16:01.523047 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 14:16:01.523082 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 30 14:16:01.532180 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:16:01.537208 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:16:01.541122 update_engine[1460]: I20250130 14:16:01.538771 1460 update_check_scheduler.cc:74] Next update check in 9m1s Jan 30 14:16:01.542888 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1393) Jan 30 14:16:01.544368 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:16:01.571871 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:16:01.572952 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:16:01.667554 bash[1518]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:16:01.670322 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:16:01.681691 systemd[1]: Starting sshkeys.service... Jan 30 14:16:01.704622 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 14:16:01.717199 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:16:01.732651 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
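The resize that extend-filesystems kicks off here is easy to sanity-check: the kernel message says /dev/sda9 grows from 1617920 to 9393147 blocks, and the resize2fs summary just below confirms 4 KiB blocks. The arithmetic, spelled out in Python:

BLOCK = 4096  # 4 KiB ext4 block size, per the "(4k)" in the resize2fs output

old_blocks, new_blocks = 1_617_920, 9_393_147
old_bytes, new_bytes = old_blocks * BLOCK, new_blocks * BLOCK

print(f"before: {old_bytes / 2**30:.2f} GiB")   # ~6.17 GiB
print(f"after:  {new_bytes / 2**30:.2f} GiB")   # ~35.83 GiB
print(f"growth: {(new_bytes - old_bytes) / 2**30:.2f} GiB")  # ~29.66 GiB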
Jan 30 14:16:01.739509 extend-filesystems[1488]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 14:16:01.739509 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 14:16:01.739509 extend-filesystems[1488]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 14:16:01.748406 extend-filesystems[1450]: Resized filesystem in /dev/sda9 Jan 30 14:16:01.748406 extend-filesystems[1450]: Found sr0 Jan 30 14:16:01.741138 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:16:01.741383 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:16:01.774824 coreos-metadata[1528]: Jan 30 14:16:01.774 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 14:16:01.779271 coreos-metadata[1528]: Jan 30 14:16:01.778 INFO Fetch successful Jan 30 14:16:01.781491 unknown[1528]: wrote ssh authorized keys file for user: core Jan 30 14:16:01.816002 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:16:01.818930 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:16:01.825788 systemd[1]: Finished sshkeys.service. Jan 30 14:16:01.852665 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:16:01.873921 containerd[1487]: time="2025-01-30T14:16:01.873666720Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:16:01.930979 containerd[1487]: time="2025-01-30T14:16:01.930706600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:16:01.934044 containerd[1487]: time="2025-01-30T14:16:01.933999000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:16:01.934150 containerd[1487]: time="2025-01-30T14:16:01.934134400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:16:01.934278 containerd[1487]: time="2025-01-30T14:16:01.934205160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:16:01.934531 containerd[1487]: time="2025-01-30T14:16:01.934511080Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:16:01.935484 containerd[1487]: time="2025-01-30T14:16:01.935288400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:16:01.935484 containerd[1487]: time="2025-01-30T14:16:01.935384200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:16:01.935484 containerd[1487]: time="2025-01-30T14:16:01.935402440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:16:01.935752 containerd[1487]: time="2025-01-30T14:16:01.935732560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:16:01.939038 containerd[1487]: time="2025-01-30T14:16:01.938283880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:16:01.939038 containerd[1487]: time="2025-01-30T14:16:01.938316040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:16:01.939038 containerd[1487]: time="2025-01-30T14:16:01.938331400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:16:01.939038 containerd[1487]: time="2025-01-30T14:16:01.938456800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:16:01.939038 containerd[1487]: time="2025-01-30T14:16:01.938661720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:16:01.939038 containerd[1487]: time="2025-01-30T14:16:01.938800080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:16:01.939038 containerd[1487]: time="2025-01-30T14:16:01.938815360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:16:01.939038 containerd[1487]: time="2025-01-30T14:16:01.938886080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:16:01.939038 containerd[1487]: time="2025-01-30T14:16:01.938926760Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:16:01.946107 containerd[1487]: time="2025-01-30T14:16:01.946032600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:16:01.946325 containerd[1487]: time="2025-01-30T14:16:01.946247840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:16:01.946390 containerd[1487]: time="2025-01-30T14:16:01.946377440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:16:01.946444 containerd[1487]: time="2025-01-30T14:16:01.946432680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:16:01.946527 containerd[1487]: time="2025-01-30T14:16:01.946513880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:16:01.947277 containerd[1487]: time="2025-01-30T14:16:01.946727400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.947589840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948439760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948456840Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948471520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948487680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948503560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948516800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948530360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948562760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948575960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948590640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948609080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948628560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.948987 containerd[1487]: time="2025-01-30T14:16:01.948643280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948656760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948672080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948683600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948697160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948710320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948723800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948737400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948751800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948763640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948778160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948801520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948826760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948847960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948860120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.949358 containerd[1487]: time="2025-01-30T14:16:01.948871000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:16:01.950555 containerd[1487]: time="2025-01-30T14:16:01.950524920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:16:01.952297 containerd[1487]: time="2025-01-30T14:16:01.950670360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:16:01.952297 containerd[1487]: time="2025-01-30T14:16:01.950686600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:16:01.952297 containerd[1487]: time="2025-01-30T14:16:01.950698880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:16:01.952297 containerd[1487]: time="2025-01-30T14:16:01.950708080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:16:01.952297 containerd[1487]: time="2025-01-30T14:16:01.950726240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:16:01.952297 containerd[1487]: time="2025-01-30T14:16:01.950737360Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:16:01.952297 containerd[1487]: time="2025-01-30T14:16:01.950747880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 14:16:01.952474 containerd[1487]: time="2025-01-30T14:16:01.951105080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:16:01.952474 containerd[1487]: time="2025-01-30T14:16:01.951164200Z" level=info msg="Connect containerd service" Jan 30 14:16:01.952474 containerd[1487]: time="2025-01-30T14:16:01.951198880Z" level=info msg="using legacy CRI server" Jan 30 14:16:01.952474 containerd[1487]: time="2025-01-30T14:16:01.951205560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:16:01.957288 containerd[1487]: time="2025-01-30T14:16:01.956561480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:16:01.961429 containerd[1487]: time="2025-01-30T14:16:01.959491160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:16:01.961429 
containerd[1487]: time="2025-01-30T14:16:01.960206680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:16:01.961429 containerd[1487]: time="2025-01-30T14:16:01.960317400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:16:01.961429 containerd[1487]: time="2025-01-30T14:16:01.960431720Z" level=info msg="Start subscribing containerd event" Jan 30 14:16:01.961429 containerd[1487]: time="2025-01-30T14:16:01.960491600Z" level=info msg="Start recovering state" Jan 30 14:16:01.961429 containerd[1487]: time="2025-01-30T14:16:01.960566680Z" level=info msg="Start event monitor" Jan 30 14:16:01.961429 containerd[1487]: time="2025-01-30T14:16:01.960579280Z" level=info msg="Start snapshots syncer" Jan 30 14:16:01.961429 containerd[1487]: time="2025-01-30T14:16:01.960593120Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:16:01.961429 containerd[1487]: time="2025-01-30T14:16:01.960601360Z" level=info msg="Start streaming server" Jan 30 14:16:01.960825 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:16:01.965831 containerd[1487]: time="2025-01-30T14:16:01.965317120Z" level=info msg="containerd successfully booted in 0.096211s" Jan 30 14:16:02.099228 tar[1478]: linux-arm64/LICENSE Jan 30 14:16:02.099349 tar[1478]: linux-arm64/README.md Jan 30 14:16:02.113314 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:16:02.185504 systemd-networkd[1379]: eth0: Gained IPv6LL Jan 30 14:16:02.187522 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection. Jan 30 14:16:02.193414 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:16:02.194780 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:16:02.204454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:16:02.209537 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:16:02.255397 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:16:02.434029 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:16:02.460897 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:16:02.470818 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:16:02.478240 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:16:02.478480 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:16:02.487776 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:16:02.500534 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:16:02.509777 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:16:02.519695 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 14:16:02.520612 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:16:02.569512 systemd-networkd[1379]: eth1: Gained IPv6LL Jan 30 14:16:02.571271 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection. Jan 30 14:16:02.952633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:16:02.955479 systemd[1]: Reached target multi-user.target - Multi-User System. 
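containerd reports booting in under a tenth of a second, but its CRI plugin logs 'failed to load cni during init' because /etc/cni/net.d is still empty; that is expected at this stage, since a CNI plugin normally lands only after the node joins a cluster. A simplified sketch of that kind of config discovery (not containerd's actual code; find_cni_configs is a made-up helper):

import os

CNI_CONF_DIR = "/etc/cni/net.d"

def find_cni_configs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
    """Return candidate CNI config files, the way a loader might."""
    if not os.path.isdir(conf_dir):
        return []
    return sorted(
        name for name in os.listdir(conf_dir)
        if name.endswith((".conf", ".conflist", ".json"))
    )

configs = find_cni_configs()
if not configs:
    print("no network config found in /etc/cni/net.d")  # matches the log
else:
    print("loaded:", configs)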
Jan 30 14:16:02.955890 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:16:02.961373 systemd[1]: Startup finished in 875ms (kernel) + 5.561s (initrd) + 4.738s (userspace) = 11.175s. Jan 30 14:16:03.534286 kubelet[1579]: E0130 14:16:03.533390 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:16:03.536810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:16:03.537346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:16:13.610173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:16:13.626732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:16:13.745977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:16:13.763123 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:16:13.825937 kubelet[1599]: E0130 14:16:13.825887 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:16:13.829626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:16:13.829786 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:16:16.246598 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:16:16.253728 systemd[1]: Started sshd@0-157.90.246.176:22-162.240.226.19:42762.service - OpenSSH per-connection server daemon (162.240.226.19:42762). Jan 30 14:16:17.152195 sshd[1608]: Invalid user yanghui from 162.240.226.19 port 42762 Jan 30 14:16:17.316928 sshd[1608]: Received disconnect from 162.240.226.19 port 42762:11: Bye Bye [preauth] Jan 30 14:16:17.316928 sshd[1608]: Disconnected from invalid user yanghui 162.240.226.19 port 42762 [preauth] Jan 30 14:16:17.319334 systemd[1]: sshd@0-157.90.246.176:22-162.240.226.19:42762.service: Deactivated successfully. Jan 30 14:16:22.048817 systemd[1]: Started sshd@1-157.90.246.176:22-185.146.232.60:41482.service - OpenSSH per-connection server daemon (185.146.232.60:41482). Jan 30 14:16:22.319287 sshd[1613]: Received disconnect from 185.146.232.60 port 41482:11: Bye Bye [preauth] Jan 30 14:16:22.319287 sshd[1613]: Disconnected from authenticating user root 185.146.232.60 port 41482 [preauth] Jan 30 14:16:22.322765 systemd[1]: sshd@1-157.90.246.176:22-185.146.232.60:41482.service: Deactivated successfully. Jan 30 14:16:23.860434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:16:23.866676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:16:24.008472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
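The kubelet failure above is the first of many in this log: the unit starts, exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (that file is typically written by kubeadm during init/join), and systemd schedules a restart roughly every ten seconds, bumping the restart counter each cycle. The gate is nothing more than a file probe, along these lines (a sketch, not kubelet's code):

import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

# Minimal re-creation of the failure mode seen in the log: without
# the config file the process exits non-zero and systemd retries.
if not os.path.exists(KUBELET_CONFIG):
    print(f"failed to load kubelet config file, path: {KUBELET_CONFIG}",
          file=sys.stderr)
    sys.exit(1)

print("config present; kubelet would proceed with startup")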
Jan 30 14:16:24.016657 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:16:24.064782 kubelet[1625]: E0130 14:16:24.064706 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:16:24.068221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:16:24.068504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:16:25.627784 systemd[1]: Started sshd@2-157.90.246.176:22-178.128.232.125:54104.service - OpenSSH per-connection server daemon (178.128.232.125:54104). Jan 30 14:16:26.261626 sshd[1633]: Invalid user mosquitto from 178.128.232.125 port 54104 Jan 30 14:16:26.370407 sshd[1633]: Received disconnect from 178.128.232.125 port 54104:11: Bye Bye [preauth] Jan 30 14:16:26.370407 sshd[1633]: Disconnected from invalid user mosquitto 178.128.232.125 port 54104 [preauth] Jan 30 14:16:26.371959 systemd[1]: sshd@2-157.90.246.176:22-178.128.232.125:54104.service: Deactivated successfully. Jan 30 14:16:33.053326 systemd-resolved[1329]: Clock change detected. Flushing caches. Jan 30 14:16:33.053646 systemd-timesyncd[1359]: Contacted time server 193.203.3.171:123 (2.flatcar.pool.ntp.org). Jan 30 14:16:33.053734 systemd-timesyncd[1359]: Initial clock synchronization to Thu 2025-01-30 14:16:33.053259 UTC. Jan 30 14:16:34.539100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:16:34.547681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:16:34.689571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:16:34.691349 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:16:34.739180 kubelet[1645]: E0130 14:16:34.739064 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:16:34.742462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:16:34.742637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:16:44.789832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 14:16:44.798157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:16:44.956558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
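The 'Clock change detected. Flushing caches.' entry above marks systemd-timesyncd stepping the wall clock after reaching 2.flatcar.pool.ntp.org, so journal timestamps on either side of that line are not directly comparable. A generic userspace way to notice such a step, offered only as an illustration, is to compare wall-clock elapsed time against the monotonic clock, which NTP never steps:

import time

wall_start, mono_start = time.time(), time.monotonic()
time.sleep(1.0)
wall_elapsed = time.time() - wall_start
mono_elapsed = time.monotonic() - mono_start

# If something (an NTP daemon, say) stepped the wall clock during the
# sleep, the two elapsed values diverge by roughly the step size.
print(f"apparent clock step: {wall_elapsed - mono_elapsed:+.3f}s")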
Jan 30 14:16:44.956714 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:16:45.002986 kubelet[1660]: E0130 14:16:45.002915 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:16:45.004874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:16:45.005053 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:16:47.261298 update_engine[1460]: I20250130 14:16:47.260691 1460 update_attempter.cc:509] Updating boot flags... Jan 30 14:16:47.314579 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1677) Jan 30 14:16:47.368260 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1681) Jan 30 14:16:47.432242 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1681) Jan 30 14:16:55.039162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 14:16:55.044887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:16:55.164497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:16:55.164894 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:16:55.205842 kubelet[1697]: E0130 14:16:55.205774 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:16:55.211324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:16:55.211744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:16:56.886662 systemd[1]: Started sshd@3-157.90.246.176:22-47.84.88.120:41496.service - OpenSSH per-connection server daemon (47.84.88.120:41496). Jan 30 14:16:58.693221 sshd[1705]: Received disconnect from 47.84.88.120 port 41496:11: Bye Bye [preauth] Jan 30 14:16:58.693221 sshd[1705]: Disconnected from authenticating user root 47.84.88.120 port 41496 [preauth] Jan 30 14:16:58.696837 systemd[1]: sshd@3-157.90.246.176:22-47.84.88.120:41496.service: Deactivated successfully. Jan 30 14:17:05.290193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 14:17:05.296726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:17:05.413331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:17:05.427830 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:17:05.470772 kubelet[1717]: E0130 14:17:05.470686 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:17:05.472794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:17:05.472923 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:17:15.539093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 14:17:15.544502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:17:15.694674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:17:15.695604 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:17:15.741664 kubelet[1732]: E0130 14:17:15.741600 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:17:15.744317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:17:15.744518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:17:25.789309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 14:17:25.795573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:17:25.937611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:17:25.937936 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:17:25.986595 kubelet[1747]: E0130 14:17:25.986532 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:17:25.988991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:17:25.989150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:17:29.343774 systemd[1]: Started sshd@4-157.90.246.176:22-162.240.226.19:40944.service - OpenSSH per-connection server daemon (162.240.226.19:40944). Jan 30 14:17:30.424688 sshd[1755]: Received disconnect from 162.240.226.19 port 40944:11: Bye Bye [preauth] Jan 30 14:17:30.424688 sshd[1755]: Disconnected from authenticating user root 162.240.226.19 port 40944 [preauth] Jan 30 14:17:30.427394 systemd[1]: sshd@4-157.90.246.176:22-162.240.226.19:40944.service: Deactivated successfully. Jan 30 14:17:35.498746 systemd[1]: Started sshd@5-157.90.246.176:22-178.128.232.125:52778.service - OpenSSH per-connection server daemon (178.128.232.125:52778). 
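The sshd entries scattered through this part of the log (invalid users 'yanghui' and 'mosquitto' above, with more probes below, all ending in '[preauth]' disconnects) are routine internet-wide SSH scanning against the host's public address, not real sessions. Tallying them from journal text is straightforward; a throwaway parser, using lines taken from this log:

import re
from collections import Counter

PATTERN = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")

sample = [
    "sshd[1608]: Invalid user yanghui from 162.240.226.19 port 42762",
    "sshd[1633]: Invalid user mosquitto from 178.128.232.125 port 54104",
    "sshd[1760]: Invalid user dev from 178.128.232.125 port 52778",
]

by_source = Counter()
for line in sample:
    m = PATTERN.search(line)
    if m:
        user, addr, _port = m.groups()
        by_source[addr] += 1

for addr, count in by_source.most_common():
    print(f"{addr}: {count} invalid-user attempt(s)")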
Jan 30 14:17:36.039180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 14:17:36.047587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:17:36.146109 sshd[1760]: Invalid user dev from 178.128.232.125 port 52778 Jan 30 14:17:36.200529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:17:36.203537 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:17:36.247858 kubelet[1770]: E0130 14:17:36.247810 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:17:36.250486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:17:36.250657 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:17:36.255002 sshd[1760]: Received disconnect from 178.128.232.125 port 52778:11: Bye Bye [preauth] Jan 30 14:17:36.255002 sshd[1760]: Disconnected from invalid user dev 178.128.232.125 port 52778 [preauth] Jan 30 14:17:36.256646 systemd[1]: sshd@5-157.90.246.176:22-178.128.232.125:52778.service: Deactivated successfully. Jan 30 14:17:38.256094 systemd[1]: Started sshd@6-157.90.246.176:22-185.146.232.60:45596.service - OpenSSH per-connection server daemon (185.146.232.60:45596). Jan 30 14:17:38.467554 sshd[1780]: Invalid user tzy from 185.146.232.60 port 45596 Jan 30 14:17:38.498115 sshd[1780]: Received disconnect from 185.146.232.60 port 45596:11: Bye Bye [preauth] Jan 30 14:17:38.498115 sshd[1780]: Disconnected from invalid user tzy 185.146.232.60 port 45596 [preauth] Jan 30 14:17:38.500279 systemd[1]: sshd@6-157.90.246.176:22-185.146.232.60:45596.service: Deactivated successfully. Jan 30 14:17:46.289847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 30 14:17:46.306715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:17:46.434435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:17:46.447291 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:17:46.508939 kubelet[1792]: E0130 14:17:46.508801 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:17:46.511961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:17:46.512157 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:17:56.539680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 14:17:56.546566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:17:56.685430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:17:56.689781 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:17:56.735384 kubelet[1807]: E0130 14:17:56.735286 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:17:56.737130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:17:56.737292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:17:57.125529 systemd[1]: Started sshd@7-157.90.246.176:22-139.178.68.195:33104.service - OpenSSH per-connection server daemon (139.178.68.195:33104). Jan 30 14:17:58.120329 sshd[1815]: Accepted publickey for core from 139.178.68.195 port 33104 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:17:58.123671 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:17:58.133937 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:17:58.139627 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:17:58.143317 systemd-logind[1459]: New session 1 of user core. Jan 30 14:17:58.155490 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:17:58.162602 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:17:58.177488 (systemd)[1819]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:17:58.289365 systemd[1819]: Queued start job for default target default.target. Jan 30 14:17:58.305888 systemd[1819]: Created slice app.slice - User Application Slice. Jan 30 14:17:58.305951 systemd[1819]: Reached target paths.target - Paths. Jan 30 14:17:58.305979 systemd[1819]: Reached target timers.target - Timers. Jan 30 14:17:58.308184 systemd[1819]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:17:58.333516 systemd[1819]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:17:58.333648 systemd[1819]: Reached target sockets.target - Sockets. Jan 30 14:17:58.333662 systemd[1819]: Reached target basic.target - Basic System. Jan 30 14:17:58.333717 systemd[1819]: Reached target default.target - Main User Target. Jan 30 14:17:58.333748 systemd[1819]: Startup finished in 148ms. Jan 30 14:17:58.333934 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:17:58.341524 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:17:59.034638 systemd[1]: Started sshd@8-157.90.246.176:22-139.178.68.195:33106.service - OpenSSH per-connection server daemon (139.178.68.195:33106). Jan 30 14:18:00.000967 sshd[1830]: Accepted publickey for core from 139.178.68.195 port 33106 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:00.002937 sshd[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:00.008296 systemd-logind[1459]: New session 2 of user core. Jan 30 14:18:00.015596 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 30 14:18:00.680172 sshd[1830]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:00.687294 systemd[1]: sshd@8-157.90.246.176:22-139.178.68.195:33106.service: Deactivated successfully. Jan 30 14:18:00.690330 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:18:00.691272 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:18:00.692498 systemd-logind[1459]: Removed session 2. Jan 30 14:18:00.858546 systemd[1]: Started sshd@9-157.90.246.176:22-139.178.68.195:33110.service - OpenSSH per-connection server daemon (139.178.68.195:33110). Jan 30 14:18:01.845188 sshd[1837]: Accepted publickey for core from 139.178.68.195 port 33110 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:01.847385 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:01.852890 systemd-logind[1459]: New session 3 of user core. Jan 30 14:18:01.864617 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:18:02.528661 sshd[1837]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:02.533978 systemd[1]: sshd@9-157.90.246.176:22-139.178.68.195:33110.service: Deactivated successfully. Jan 30 14:18:02.536899 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:18:02.539106 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:18:02.540712 systemd-logind[1459]: Removed session 3. Jan 30 14:18:02.702666 systemd[1]: Started sshd@10-157.90.246.176:22-139.178.68.195:33124.service - OpenSSH per-connection server daemon (139.178.68.195:33124). Jan 30 14:18:03.676033 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 33124 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:03.677796 sshd[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:03.682627 systemd-logind[1459]: New session 4 of user core. Jan 30 14:18:03.693552 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:18:04.356565 sshd[1844]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:04.360370 systemd[1]: sshd@10-157.90.246.176:22-139.178.68.195:33124.service: Deactivated successfully. Jan 30 14:18:04.362008 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:18:04.363603 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:18:04.365011 systemd-logind[1459]: Removed session 4. Jan 30 14:18:04.530742 systemd[1]: Started sshd@11-157.90.246.176:22-139.178.68.195:33130.service - OpenSSH per-connection server daemon (139.178.68.195:33130). Jan 30 14:18:05.497457 sshd[1851]: Accepted publickey for core from 139.178.68.195 port 33130 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:05.499636 sshd[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:05.506238 systemd-logind[1459]: New session 5 of user core. Jan 30 14:18:05.512553 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 30 14:18:06.024130 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:18:06.024486 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:18:06.038671 sudo[1854]: pam_unix(sudo:session): session closed for user root Jan 30 14:18:06.197628 sshd[1851]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:06.204337 systemd[1]: sshd@11-157.90.246.176:22-139.178.68.195:33130.service: Deactivated successfully. Jan 30 14:18:06.206979 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:18:06.208226 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:18:06.209465 systemd-logind[1459]: Removed session 5. Jan 30 14:18:06.369707 systemd[1]: Started sshd@12-157.90.246.176:22-139.178.68.195:58106.service - OpenSSH per-connection server daemon (139.178.68.195:58106). Jan 30 14:18:06.789347 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 14:18:06.796638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:18:06.943251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:18:06.948789 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:18:06.994050 kubelet[1869]: E0130 14:18:06.993906 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:18:06.995668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:18:06.995868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:18:07.339378 sshd[1859]: Accepted publickey for core from 139.178.68.195 port 58106 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:07.342636 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:07.348489 systemd-logind[1459]: New session 6 of user core. Jan 30 14:18:07.350437 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:18:07.858528 sudo[1879]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:18:07.858862 sudo[1879]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:18:07.863258 sudo[1879]: pam_unix(sudo:session): session closed for user root Jan 30 14:18:07.869572 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:18:07.869853 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:18:07.885707 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:18:07.888860 auditctl[1882]: No rules Jan 30 14:18:07.889365 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:18:07.889617 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:18:07.899664 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:18:07.926857 augenrules[1901]: No rules Jan 30 14:18:07.928284 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
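[Annotation] The sequence above deletes the shipped rule files and restarts audit-rules.service; both auditctl (on stop) and augenrules (on start) then report "No rules" because /etc/audit/rules.d/ is now empty. Assuming the standard auditd tooling, the restart is roughly equivalent to this sketch (requires root):

import subprocess

# Flush the rules currently loaded in the kernel, then recompile whatever
# remains under /etc/audit/rules.d/ (nothing here, hence "No rules").
subprocess.run(["auditctl", "-D"], check=True)
subprocess.run(["augenrules", "--load"], check=True)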
Jan 30 14:18:07.929519 sudo[1878]: pam_unix(sudo:session): session closed for user root Jan 30 14:18:08.090032 sshd[1859]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:08.093544 systemd[1]: sshd@12-157.90.246.176:22-139.178.68.195:58106.service: Deactivated successfully. Jan 30 14:18:08.095485 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:18:08.098488 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:18:08.099800 systemd-logind[1459]: Removed session 6. Jan 30 14:18:08.264565 systemd[1]: Started sshd@13-157.90.246.176:22-139.178.68.195:58122.service - OpenSSH per-connection server daemon (139.178.68.195:58122). Jan 30 14:18:09.245583 sshd[1909]: Accepted publickey for core from 139.178.68.195 port 58122 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:09.247377 sshd[1909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:09.253123 systemd-logind[1459]: New session 7 of user core. Jan 30 14:18:09.260625 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:18:09.769884 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:18:09.770569 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:18:10.065668 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:18:10.078779 (dockerd)[1927]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:18:10.323701 dockerd[1927]: time="2025-01-30T14:18:10.322326537Z" level=info msg="Starting up" Jan 30 14:18:10.400619 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport866180850-merged.mount: Deactivated successfully. Jan 30 14:18:10.436019 dockerd[1927]: time="2025-01-30T14:18:10.435601413Z" level=info msg="Loading containers: start." Jan 30 14:18:10.561223 kernel: Initializing XFRM netlink socket Jan 30 14:18:10.634482 systemd-networkd[1379]: docker0: Link UP Jan 30 14:18:10.653702 dockerd[1927]: time="2025-01-30T14:18:10.653586276Z" level=info msg="Loading containers: done." Jan 30 14:18:10.667543 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2722187070-merged.mount: Deactivated successfully. Jan 30 14:18:10.671971 dockerd[1927]: time="2025-01-30T14:18:10.671780954Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:18:10.671971 dockerd[1927]: time="2025-01-30T14:18:10.671948831Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:18:10.672151 dockerd[1927]: time="2025-01-30T14:18:10.672068949Z" level=info msg="Daemon has completed initialization" Jan 30 14:18:10.714239 dockerd[1927]: time="2025-01-30T14:18:10.712958025Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:18:10.714899 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:18:11.766642 containerd[1487]: time="2025-01-30T14:18:11.766553719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 14:18:12.404698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679365098.mount: Deactivated successfully. 
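[Annotation] dockerd's warning just below about "Not using native diff for overlay2" is keyed off a kernel build option. A quick way to check it on a running system (this assumes CONFIG_IKCONFIG_PROC is built in, so /proc/config.gz may not exist everywhere):

import gzip

# Look up the option the dockerd warning refers to; when it is enabled,
# dockerd falls back to the slower naive diff for the overlay2 driver.
with gzip.open("/proc/config.gz", "rt") as f:
    for line in f:
        if line.startswith("CONFIG_OVERLAY_FS_REDIRECT_DIR"):
            print(line.strip())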
Jan 30 14:18:13.241879 containerd[1487]: time="2025-01-30T14:18:13.241808882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:13.243587 containerd[1487]: time="2025-01-30T14:18:13.243530174Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618162" Jan 30 14:18:13.244397 containerd[1487]: time="2025-01-30T14:18:13.244343361Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:13.251237 containerd[1487]: time="2025-01-30T14:18:13.248878367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:13.251638 containerd[1487]: time="2025-01-30T14:18:13.251586803Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 1.484986085s" Jan 30 14:18:13.251638 containerd[1487]: time="2025-01-30T14:18:13.251631602Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 30 14:18:13.254084 containerd[1487]: time="2025-01-30T14:18:13.254044283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 14:18:14.405618 containerd[1487]: time="2025-01-30T14:18:14.405544174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:14.407218 containerd[1487]: time="2025-01-30T14:18:14.406901352Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469487" Jan 30 14:18:14.408459 containerd[1487]: time="2025-01-30T14:18:14.408412408Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:14.411991 containerd[1487]: time="2025-01-30T14:18:14.411940113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:14.413267 containerd[1487]: time="2025-01-30T14:18:14.413126654Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.159041692s" Jan 30 14:18:14.413267 containerd[1487]: time="2025-01-30T14:18:14.413165293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 30 14:18:14.413845 
containerd[1487]: time="2025-01-30T14:18:14.413673445Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 14:18:15.499486 containerd[1487]: time="2025-01-30T14:18:15.499392567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:15.500948 containerd[1487]: time="2025-01-30T14:18:15.500653107Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024237" Jan 30 14:18:15.501830 containerd[1487]: time="2025-01-30T14:18:15.501767970Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:15.506424 containerd[1487]: time="2025-01-30T14:18:15.506301820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:15.507982 containerd[1487]: time="2025-01-30T14:18:15.507798157Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.094093712s" Jan 30 14:18:15.507982 containerd[1487]: time="2025-01-30T14:18:15.507843397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 30 14:18:15.508491 containerd[1487]: time="2025-01-30T14:18:15.508446267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 14:18:16.463307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4194729682.mount: Deactivated successfully. 
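[Annotation] A back-of-the-envelope check of the pull rates containerd reports above: effective throughput is the reported image size divided by the reported pull duration. The numbers below are copied from the log lines:

pulls = [  # (image, size in bytes, pull duration in seconds), from the log
    ("kube-apiserver:v1.31.5",          25_614_870, 1.484986085),
    ("kube-controller-manager:v1.31.5", 23_873_257, 1.159041692),
    ("kube-scheduler:v1.31.5",          18_428_025, 1.094093712),
]

for image, size_bytes, seconds in pulls:
    mib_s = size_bytes / seconds / 1024**2
    print(f"{image}: {mib_s:.1f} MiB/s")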
Jan 30 14:18:16.757109 containerd[1487]: time="2025-01-30T14:18:16.756327024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:16.758010 containerd[1487]: time="2025-01-30T14:18:16.757976880Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772143" Jan 30 14:18:16.758975 containerd[1487]: time="2025-01-30T14:18:16.758923305Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:16.761175 containerd[1487]: time="2025-01-30T14:18:16.761142152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:16.762306 containerd[1487]: time="2025-01-30T14:18:16.762059858Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.253560832s" Jan 30 14:18:16.762306 containerd[1487]: time="2025-01-30T14:18:16.762130897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 30 14:18:16.763160 containerd[1487]: time="2025-01-30T14:18:16.762813407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:18:17.039290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 30 14:18:17.045596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:18:17.238907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:18:17.249773 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:18:17.300156 kubelet[2144]: E0130 14:18:17.300017 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:18:17.302982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:18:17.303139 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:18:17.404292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239655506.mount: Deactivated successfully. 
Jan 30 14:18:17.952307 containerd[1487]: time="2025-01-30T14:18:17.952260830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:17.954281 containerd[1487]: time="2025-01-30T14:18:17.954163682Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 30 14:18:17.955463 containerd[1487]: time="2025-01-30T14:18:17.955354145Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:17.957908 containerd[1487]: time="2025-01-30T14:18:17.957855269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:17.961472 containerd[1487]: time="2025-01-30T14:18:17.961408217Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.198446492s" Jan 30 14:18:17.961472 containerd[1487]: time="2025-01-30T14:18:17.961467936Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 14:18:17.964062 containerd[1487]: time="2025-01-30T14:18:17.963838462Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 14:18:18.497642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198136984.mount: Deactivated successfully. 
Jan 30 14:18:18.502145 containerd[1487]: time="2025-01-30T14:18:18.502073621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:18.503539 containerd[1487]: time="2025-01-30T14:18:18.503500801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 30 14:18:18.504122 containerd[1487]: time="2025-01-30T14:18:18.503676719Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:18.506241 containerd[1487]: time="2025-01-30T14:18:18.506135604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:18.506861 containerd[1487]: time="2025-01-30T14:18:18.506823354Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 542.946293ms" Jan 30 14:18:18.506861 containerd[1487]: time="2025-01-30T14:18:18.506860154Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 30 14:18:18.507472 containerd[1487]: time="2025-01-30T14:18:18.507321547Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 14:18:19.112860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237326338.mount: Deactivated successfully. Jan 30 14:18:20.477453 systemd[1]: Started sshd@14-157.90.246.176:22-117.50.209.157:50770.service - OpenSSH per-connection server daemon (117.50.209.157:50770). 
Jan 30 14:18:20.505530 containerd[1487]: time="2025-01-30T14:18:20.505457001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:20.507056 containerd[1487]: time="2025-01-30T14:18:20.506914141Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487" Jan 30 14:18:20.508290 containerd[1487]: time="2025-01-30T14:18:20.508239484Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:20.512905 containerd[1487]: time="2025-01-30T14:18:20.512846502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:20.514833 containerd[1487]: time="2025-01-30T14:18:20.514676597Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.00732045s" Jan 30 14:18:20.514833 containerd[1487]: time="2025-01-30T14:18:20.514715877Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 30 14:18:25.669555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:18:25.687665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:18:25.728256 systemd[1]: Reloading requested from client PID 2281 ('systemctl') (unit session-7.scope)... Jan 30 14:18:25.728275 systemd[1]: Reloading... Jan 30 14:18:25.836230 zram_generator::config[2322]: No configuration found. Jan 30 14:18:25.941930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:18:26.011080 systemd[1]: Reloading finished in 282 ms. Jan 30 14:18:26.059919 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:18:26.060009 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:18:26.060329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:18:26.063460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:18:26.166601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:18:26.178187 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:18:26.222268 kubelet[2371]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:18:26.222268 kubelet[2371]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
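[Annotation] The deprecation warnings here and just below point at the kubelet config file; two of the three flagged options have config-file equivalents (--pod-infra-container-image is slated for removal instead). An illustrative fragment, with field names per the v1beta1 KubeletConfiguration schema as I understand it; the runtime endpoint is an assumed containerd default and the plugin dir is the path logged further below, so verify both against your release:

import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint (assumed containerd socket)
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # replaces --volume-plugin-dir; this path appears later in this log
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
print(json.dumps(kubelet_config, indent=2))  # valid YAML, as YAML is a superset of JSON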
Jan 30 14:18:26.222268 kubelet[2371]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:18:26.222268 kubelet[2371]: I0130 14:18:26.220573 2371 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:18:27.398236 kubelet[2371]: I0130 14:18:27.397453 2371 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 14:18:27.398236 kubelet[2371]: I0130 14:18:27.397498 2371 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:18:27.398236 kubelet[2371]: I0130 14:18:27.398006 2371 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 14:18:27.430181 kubelet[2371]: E0130 14:18:27.430128 2371 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://157.90.246.176:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.90.246.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:18:27.431346 kubelet[2371]: I0130 14:18:27.431188 2371 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:18:27.440381 kubelet[2371]: E0130 14:18:27.440334 2371 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:18:27.440381 kubelet[2371]: I0130 14:18:27.440369 2371 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:18:27.443816 kubelet[2371]: I0130 14:18:27.443777 2371 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:18:27.444908 kubelet[2371]: I0130 14:18:27.444865 2371 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 14:18:27.445125 kubelet[2371]: I0130 14:18:27.445065 2371 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:18:27.445325 kubelet[2371]: I0130 14:18:27.445099 2371 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-dd601a010b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:18:27.445469 kubelet[2371]: I0130 14:18:27.445437 2371 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:18:27.445469 kubelet[2371]: I0130 14:18:27.445448 2371 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 14:18:27.445671 kubelet[2371]: I0130 14:18:27.445629 2371 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:18:27.447905 kubelet[2371]: I0130 14:18:27.447837 2371 kubelet.go:408] "Attempting to sync node with API server" Jan 30 14:18:27.447905 kubelet[2371]: I0130 14:18:27.447866 2371 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:18:27.447905 kubelet[2371]: I0130 14:18:27.447893 2371 kubelet.go:314] "Adding apiserver pod source" Jan 30 14:18:27.447905 kubelet[2371]: I0130 14:18:27.447902 2371 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:18:27.456545 kubelet[2371]: W0130 14:18:27.456217 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.90.246.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-dd601a010b&limit=500&resourceVersion=0": dial tcp 157.90.246.176:6443: connect: connection refused Jan 30 14:18:27.456720 kubelet[2371]: E0130 14:18:27.456695 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://157.90.246.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-dd601a010b&limit=500&resourceVersion=0\": dial tcp 157.90.246.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:18:27.456880 kubelet[2371]: I0130 14:18:27.456863 2371 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:18:27.457557 kubelet[2371]: W0130 14:18:27.457507 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.90.246.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.90.246.176:6443: connect: connection refused Jan 30 14:18:27.457652 kubelet[2371]: E0130 14:18:27.457557 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.90.246.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.90.246.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:18:27.459296 kubelet[2371]: I0130 14:18:27.459276 2371 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:18:27.460307 kubelet[2371]: W0130 14:18:27.460289 2371 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:18:27.461120 kubelet[2371]: I0130 14:18:27.461097 2371 server.go:1269] "Started kubelet" Jan 30 14:18:27.461957 kubelet[2371]: I0130 14:18:27.461915 2371 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:18:27.465076 kubelet[2371]: I0130 14:18:27.464958 2371 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:18:27.465548 kubelet[2371]: I0130 14:18:27.465529 2371 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:18:27.466727 kubelet[2371]: I0130 14:18:27.466700 2371 server.go:460] "Adding debug handlers to kubelet server" Jan 30 14:18:27.470229 kubelet[2371]: E0130 14:18:27.468250 2371 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.90.246.176:6443/api/v1/namespaces/default/events\": dial tcp 157.90.246.176:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-2-dd601a010b.181f7e2bba359ed2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-dd601a010b,UID:ci-4081-3-0-2-dd601a010b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-dd601a010b,},FirstTimestamp:2025-01-30 14:18:27.461070546 +0000 UTC m=+1.276089471,LastTimestamp:2025-01-30 14:18:27.461070546 +0000 UTC m=+1.276089471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-dd601a010b,}" Jan 30 14:18:27.472844 kubelet[2371]: I0130 14:18:27.472820 2371 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:18:27.473421 kubelet[2371]: E0130 14:18:27.473401 2371 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:18:27.473741 kubelet[2371]: I0130 14:18:27.473724 2371 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:18:27.476631 kubelet[2371]: I0130 14:18:27.476613 2371 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 14:18:27.477070 kubelet[2371]: I0130 14:18:27.477041 2371 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 14:18:27.477235 kubelet[2371]: I0130 14:18:27.477223 2371 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:18:27.477409 kubelet[2371]: E0130 14:18:27.477383 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-dd601a010b\" not found" Jan 30 14:18:27.478007 kubelet[2371]: E0130 14:18:27.477979 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.246.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-dd601a010b?timeout=10s\": dial tcp 157.90.246.176:6443: connect: connection refused" interval="200ms" Jan 30 14:18:27.478192 kubelet[2371]: W0130 14:18:27.478154 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.90.246.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.246.176:6443: connect: connection refused Jan 30 14:18:27.478301 kubelet[2371]: E0130 14:18:27.478284 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.90.246.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.90.246.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:18:27.478532 kubelet[2371]: I0130 14:18:27.478517 2371 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:18:27.478666 kubelet[2371]: I0130 14:18:27.478652 2371 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:18:27.480255 kubelet[2371]: I0130 14:18:27.479922 2371 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:18:27.487814 kubelet[2371]: I0130 14:18:27.487761 2371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:18:27.489125 kubelet[2371]: I0130 14:18:27.489077 2371 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:18:27.489125 kubelet[2371]: I0130 14:18:27.489105 2371 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:18:27.489125 kubelet[2371]: I0130 14:18:27.489125 2371 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 14:18:27.489284 kubelet[2371]: E0130 14:18:27.489171 2371 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:18:27.496399 kubelet[2371]: W0130 14:18:27.496329 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.90.246.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.246.176:6443: connect: connection refused Jan 30 14:18:27.496710 kubelet[2371]: E0130 14:18:27.496403 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.90.246.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.90.246.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:18:27.505298 kubelet[2371]: I0130 14:18:27.504952 2371 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:18:27.505298 kubelet[2371]: I0130 14:18:27.504972 2371 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:18:27.505298 kubelet[2371]: I0130 14:18:27.504990 2371 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:18:27.507215 kubelet[2371]: I0130 14:18:27.507188 2371 policy_none.go:49] "None policy: Start" Jan 30 14:18:27.508115 kubelet[2371]: I0130 14:18:27.508079 2371 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:18:27.508192 kubelet[2371]: I0130 14:18:27.508126 2371 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:18:27.516905 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:18:27.532497 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:18:27.536775 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 14:18:27.545762 kubelet[2371]: I0130 14:18:27.545719 2371 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:18:27.546631 kubelet[2371]: I0130 14:18:27.546604 2371 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:18:27.546727 kubelet[2371]: I0130 14:18:27.546634 2371 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:18:27.547818 kubelet[2371]: I0130 14:18:27.546981 2371 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:18:27.548935 kubelet[2371]: E0130 14:18:27.548915 2371 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-2-dd601a010b\" not found" Jan 30 14:18:27.603981 systemd[1]: Created slice kubepods-burstable-podb1b9b17d3e887d78b0d7ee8426772742.slice - libcontainer container kubepods-burstable-podb1b9b17d3e887d78b0d7ee8426772742.slice. Jan 30 14:18:27.618080 systemd[1]: Created slice kubepods-burstable-pod3ac58fa984cac7510ced29453bbd532a.slice - libcontainer container kubepods-burstable-pod3ac58fa984cac7510ced29453bbd532a.slice. 
Jan 30 14:18:27.631802 systemd[1]: Created slice kubepods-burstable-pod45c20d51d131dc0c5dc121184a74fcbe.slice - libcontainer container kubepods-burstable-pod45c20d51d131dc0c5dc121184a74fcbe.slice. Jan 30 14:18:27.650143 kubelet[2371]: I0130 14:18:27.649970 2371 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.651050 kubelet[2371]: E0130 14:18:27.650968 2371 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.90.246.176:6443/api/v1/nodes\": dial tcp 157.90.246.176:6443: connect: connection refused" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.679363 kubelet[2371]: E0130 14:18:27.679270 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.246.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-dd601a010b?timeout=10s\": dial tcp 157.90.246.176:6443: connect: connection refused" interval="400ms" Jan 30 14:18:27.778132 kubelet[2371]: I0130 14:18:27.777953 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.778132 kubelet[2371]: I0130 14:18:27.778039 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45c20d51d131dc0c5dc121184a74fcbe-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-dd601a010b\" (UID: \"45c20d51d131dc0c5dc121184a74fcbe\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.778132 kubelet[2371]: I0130 14:18:27.778081 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1b9b17d3e887d78b0d7ee8426772742-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-dd601a010b\" (UID: \"b1b9b17d3e887d78b0d7ee8426772742\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.778132 kubelet[2371]: I0130 14:18:27.778111 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1b9b17d3e887d78b0d7ee8426772742-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-dd601a010b\" (UID: \"b1b9b17d3e887d78b0d7ee8426772742\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.778132 kubelet[2371]: I0130 14:18:27.778140 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.778523 kubelet[2371]: I0130 14:18:27.778170 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 
14:18:27.778523 kubelet[2371]: I0130 14:18:27.778220 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1b9b17d3e887d78b0d7ee8426772742-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-dd601a010b\" (UID: \"b1b9b17d3e887d78b0d7ee8426772742\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.778523 kubelet[2371]: I0130 14:18:27.778249 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.778523 kubelet[2371]: I0130 14:18:27.778275 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.854320 kubelet[2371]: I0130 14:18:27.854276 2371 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.854906 kubelet[2371]: E0130 14:18:27.854744 2371 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.90.246.176:6443/api/v1/nodes\": dial tcp 157.90.246.176:6443: connect: connection refused" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:27.914283 containerd[1487]: time="2025-01-30T14:18:27.914075782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-dd601a010b,Uid:b1b9b17d3e887d78b0d7ee8426772742,Namespace:kube-system,Attempt:0,}" Jan 30 14:18:27.929302 containerd[1487]: time="2025-01-30T14:18:27.929239814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-dd601a010b,Uid:3ac58fa984cac7510ced29453bbd532a,Namespace:kube-system,Attempt:0,}" Jan 30 14:18:27.936997 containerd[1487]: time="2025-01-30T14:18:27.936925048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-dd601a010b,Uid:45c20d51d131dc0c5dc121184a74fcbe,Namespace:kube-system,Attempt:0,}" Jan 30 14:18:28.080249 kubelet[2371]: E0130 14:18:28.079993 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.246.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-dd601a010b?timeout=10s\": dial tcp 157.90.246.176:6443: connect: connection refused" interval="800ms" Jan 30 14:18:28.258771 kubelet[2371]: I0130 14:18:28.258485 2371 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:28.259147 kubelet[2371]: E0130 14:18:28.259101 2371 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.90.246.176:6443/api/v1/nodes\": dial tcp 157.90.246.176:6443: connect: connection refused" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:28.280478 kubelet[2371]: W0130 14:18:28.280375 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.90.246.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-dd601a010b&limit=500&resourceVersion=0": dial 
tcp 157.90.246.176:6443: connect: connection refused Jan 30 14:18:28.280478 kubelet[2371]: E0130 14:18:28.280479 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.90.246.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-dd601a010b&limit=500&resourceVersion=0\": dial tcp 157.90.246.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:18:28.343307 kubelet[2371]: W0130 14:18:28.343226 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.90.246.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.246.176:6443: connect: connection refused Jan 30 14:18:28.343307 kubelet[2371]: E0130 14:18:28.343274 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.90.246.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.90.246.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:18:28.437420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169984402.mount: Deactivated successfully. Jan 30 14:18:28.443790 containerd[1487]: time="2025-01-30T14:18:28.442929860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:18:28.444827 containerd[1487]: time="2025-01-30T14:18:28.444745641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 30 14:18:28.447167 containerd[1487]: time="2025-01-30T14:18:28.447114055Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:18:28.448816 containerd[1487]: time="2025-01-30T14:18:28.448750837Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:18:28.449979 containerd[1487]: time="2025-01-30T14:18:28.449589868Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:18:28.451083 containerd[1487]: time="2025-01-30T14:18:28.451049932Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:18:28.453222 containerd[1487]: time="2025-01-30T14:18:28.452053401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:18:28.458862 containerd[1487]: time="2025-01-30T14:18:28.458819688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:18:28.459657 containerd[1487]: time="2025-01-30T14:18:28.459622719Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.279186ms" Jan 30 14:18:28.461278 containerd[1487]: time="2025-01-30T14:18:28.461246022Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.066401ms" Jan 30 14:18:28.462879 containerd[1487]: time="2025-01-30T14:18:28.462847724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.747678ms" Jan 30 14:18:28.537788 kubelet[2371]: W0130 14:18:28.537525 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.90.246.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.90.246.176:6443: connect: connection refused Jan 30 14:18:28.537788 kubelet[2371]: E0130 14:18:28.537637 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.90.246.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.90.246.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:18:28.573824 containerd[1487]: time="2025-01-30T14:18:28.572948930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:18:28.573824 containerd[1487]: time="2025-01-30T14:18:28.573739401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:18:28.573824 containerd[1487]: time="2025-01-30T14:18:28.573756361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:28.575151 containerd[1487]: time="2025-01-30T14:18:28.575044267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:18:28.575151 containerd[1487]: time="2025-01-30T14:18:28.575096107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:18:28.575151 containerd[1487]: time="2025-01-30T14:18:28.574603872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:18:28.575151 containerd[1487]: time="2025-01-30T14:18:28.574659591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:18:28.575151 containerd[1487]: time="2025-01-30T14:18:28.574675551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:28.575151 containerd[1487]: time="2025-01-30T14:18:28.574749710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:28.575151 containerd[1487]: time="2025-01-30T14:18:28.574749470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:28.575943 containerd[1487]: time="2025-01-30T14:18:28.575635741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:28.575943 containerd[1487]: time="2025-01-30T14:18:28.575761139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:28.597427 systemd[1]: Started cri-containerd-89c2d376d03be764c12b8bbc30ada8f68e4ef74eec8f51f47c86157d5d21139a.scope - libcontainer container 89c2d376d03be764c12b8bbc30ada8f68e4ef74eec8f51f47c86157d5d21139a. Jan 30 14:18:28.612395 systemd[1]: Started cri-containerd-1a21525658b392dbc75aea3ee749701cf6019b3677775f1b06f49aecdf18b433.scope - libcontainer container 1a21525658b392dbc75aea3ee749701cf6019b3677775f1b06f49aecdf18b433. Jan 30 14:18:28.618337 systemd[1]: Started cri-containerd-04137a962cffcf8929f2110e38d22dc2dce203f8e315a3cf3aaa6793b154cda4.scope - libcontainer container 04137a962cffcf8929f2110e38d22dc2dce203f8e315a3cf3aaa6793b154cda4. Jan 30 14:18:28.650451 containerd[1487]: time="2025-01-30T14:18:28.650318691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-dd601a010b,Uid:45c20d51d131dc0c5dc121184a74fcbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"89c2d376d03be764c12b8bbc30ada8f68e4ef74eec8f51f47c86157d5d21139a\"" Jan 30 14:18:28.658548 containerd[1487]: time="2025-01-30T14:18:28.658191205Z" level=info msg="CreateContainer within sandbox \"89c2d376d03be764c12b8bbc30ada8f68e4ef74eec8f51f47c86157d5d21139a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:18:28.673235 containerd[1487]: time="2025-01-30T14:18:28.673160723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-dd601a010b,Uid:3ac58fa984cac7510ced29453bbd532a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a21525658b392dbc75aea3ee749701cf6019b3677775f1b06f49aecdf18b433\"" Jan 30 14:18:28.676688 containerd[1487]: time="2025-01-30T14:18:28.676565166Z" level=info msg="CreateContainer within sandbox \"1a21525658b392dbc75aea3ee749701cf6019b3677775f1b06f49aecdf18b433\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:18:28.680399 containerd[1487]: time="2025-01-30T14:18:28.680362965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-dd601a010b,Uid:b1b9b17d3e887d78b0d7ee8426772742,Namespace:kube-system,Attempt:0,} returns sandbox id \"04137a962cffcf8929f2110e38d22dc2dce203f8e315a3cf3aaa6793b154cda4\"" Jan 30 14:18:28.682135 containerd[1487]: time="2025-01-30T14:18:28.682089706Z" level=info msg="CreateContainer within sandbox \"89c2d376d03be764c12b8bbc30ada8f68e4ef74eec8f51f47c86157d5d21139a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0\"" Jan 30 14:18:28.682655 containerd[1487]: time="2025-01-30T14:18:28.682629660Z" level=info msg="StartContainer for \"fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0\"" Jan 30 14:18:28.684795 containerd[1487]: time="2025-01-30T14:18:28.684686638Z" level=info msg="CreateContainer within sandbox 
\"04137a962cffcf8929f2110e38d22dc2dce203f8e315a3cf3aaa6793b154cda4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:18:28.697900 containerd[1487]: time="2025-01-30T14:18:28.697854975Z" level=info msg="CreateContainer within sandbox \"1a21525658b392dbc75aea3ee749701cf6019b3677775f1b06f49aecdf18b433\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5\"" Jan 30 14:18:28.699489 containerd[1487]: time="2025-01-30T14:18:28.699459798Z" level=info msg="StartContainer for \"8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5\"" Jan 30 14:18:28.702540 containerd[1487]: time="2025-01-30T14:18:28.702504405Z" level=info msg="CreateContainer within sandbox \"04137a962cffcf8929f2110e38d22dc2dce203f8e315a3cf3aaa6793b154cda4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9479dbec321ad0b5535452d375586fae5c2fb50d5d6ee77f3a853567e3974b3b\"" Jan 30 14:18:28.704451 containerd[1487]: time="2025-01-30T14:18:28.704384904Z" level=info msg="StartContainer for \"9479dbec321ad0b5535452d375586fae5c2fb50d5d6ee77f3a853567e3974b3b\"" Jan 30 14:18:28.717581 systemd[1]: Started cri-containerd-fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0.scope - libcontainer container fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0. Jan 30 14:18:28.741502 systemd[1]: Started cri-containerd-8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5.scope - libcontainer container 8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5. Jan 30 14:18:28.754321 systemd[1]: Started cri-containerd-9479dbec321ad0b5535452d375586fae5c2fb50d5d6ee77f3a853567e3974b3b.scope - libcontainer container 9479dbec321ad0b5535452d375586fae5c2fb50d5d6ee77f3a853567e3974b3b. 
Jan 30 14:18:28.771948 containerd[1487]: time="2025-01-30T14:18:28.771892932Z" level=info msg="StartContainer for \"fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0\" returns successfully" Jan 30 14:18:28.799541 containerd[1487]: time="2025-01-30T14:18:28.799502392Z" level=info msg="StartContainer for \"8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5\" returns successfully" Jan 30 14:18:28.804214 containerd[1487]: time="2025-01-30T14:18:28.803775786Z" level=info msg="StartContainer for \"9479dbec321ad0b5535452d375586fae5c2fb50d5d6ee77f3a853567e3974b3b\" returns successfully" Jan 30 14:18:28.881109 kubelet[2371]: E0130 14:18:28.880964 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.246.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-dd601a010b?timeout=10s\": dial tcp 157.90.246.176:6443: connect: connection refused" interval="1.6s" Jan 30 14:18:29.063096 kubelet[2371]: I0130 14:18:29.062836 2371 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:31.209405 kubelet[2371]: I0130 14:18:31.209108 2371 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:31.246586 kubelet[2371]: E0130 14:18:31.246481 2371 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-2-dd601a010b.181f7e2bba359ed2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-dd601a010b,UID:ci-4081-3-0-2-dd601a010b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-dd601a010b,},FirstTimestamp:2025-01-30 14:18:27.461070546 +0000 UTC m=+1.276089471,LastTimestamp:2025-01-30 14:18:27.461070546 +0000 UTC m=+1.276089471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-dd601a010b,}" Jan 30 14:18:31.291052 kubelet[2371]: E0130 14:18:31.290999 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 30 14:18:31.304337 kubelet[2371]: E0130 14:18:31.304226 2371 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-2-dd601a010b.181f7e2bbaf18c99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-dd601a010b,UID:ci-4081-3-0-2-dd601a010b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-dd601a010b,},FirstTimestamp:2025-01-30 14:18:27.473386649 +0000 UTC m=+1.288405614,LastTimestamp:2025-01-30 14:18:27.473386649 +0000 UTC m=+1.288405614,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-dd601a010b,}" Jan 30 14:18:31.459837 kubelet[2371]: I0130 14:18:31.459094 2371 apiserver.go:52] "Watching apiserver" Jan 30 14:18:31.477714 kubelet[2371]: I0130 14:18:31.477652 2371 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 14:18:33.130004 systemd[1]: Reloading requested from client PID 2649 
('systemctl') (unit session-7.scope)... Jan 30 14:18:33.130024 systemd[1]: Reloading... Jan 30 14:18:33.232228 zram_generator::config[2692]: No configuration found. Jan 30 14:18:33.339856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:18:33.432048 systemd[1]: Reloading finished in 301 ms. Jan 30 14:18:33.470003 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:18:33.483477 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:18:33.483859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:18:33.484011 systemd[1]: kubelet.service: Consumed 1.689s CPU time, 120.7M memory peak, 0B memory swap peak. Jan 30 14:18:33.494665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:18:33.621544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:18:33.621663 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:18:33.665871 kubelet[2736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:18:33.665871 kubelet[2736]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:18:33.665871 kubelet[2736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:18:33.666365 kubelet[2736]: I0130 14:18:33.665951 2736 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:18:33.671733 kubelet[2736]: I0130 14:18:33.671687 2736 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 14:18:33.671733 kubelet[2736]: I0130 14:18:33.671714 2736 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:18:33.672038 kubelet[2736]: I0130 14:18:33.672022 2736 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 14:18:33.673506 kubelet[2736]: I0130 14:18:33.673478 2736 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:18:33.681236 kubelet[2736]: I0130 14:18:33.680660 2736 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:18:33.685566 kubelet[2736]: E0130 14:18:33.685447 2736 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:18:33.685714 kubelet[2736]: I0130 14:18:33.685697 2736 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:18:33.690073 kubelet[2736]: I0130 14:18:33.690037 2736 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:18:33.690447 kubelet[2736]: I0130 14:18:33.690431 2736 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 14:18:33.690687 kubelet[2736]: I0130 14:18:33.690658 2736 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:18:33.691050 kubelet[2736]: I0130 14:18:33.690834 2736 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-dd601a010b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:18:33.691226 kubelet[2736]: I0130 14:18:33.691188 2736 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:18:33.691309 kubelet[2736]: I0130 14:18:33.691299 2736 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 14:18:33.691676 kubelet[2736]: I0130 14:18:33.691454 2736 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:18:33.691676 kubelet[2736]: I0130 14:18:33.691578 2736 kubelet.go:408] "Attempting to sync node with API server" Jan 30 14:18:33.691676 kubelet[2736]: I0130 14:18:33.691591 2736 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:18:33.691676 kubelet[2736]: I0130 14:18:33.691614 2736 kubelet.go:314] "Adding apiserver pod source" Jan 30 14:18:33.691676 kubelet[2736]: I0130 14:18:33.691624 2736 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:18:33.694025 kubelet[2736]: I0130 14:18:33.693843 2736 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:18:33.694831 kubelet[2736]: I0130 14:18:33.694808 2736 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:18:33.696050 kubelet[2736]: I0130 14:18:33.695475 2736 server.go:1269] "Started kubelet" Jan 30 14:18:33.697981 kubelet[2736]: I0130 14:18:33.697954 2736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:18:33.706622 
kubelet[2736]: I0130 14:18:33.706575 2736 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:18:33.707802 kubelet[2736]: I0130 14:18:33.707775 2736 server.go:460] "Adding debug handlers to kubelet server" Jan 30 14:18:33.708930 kubelet[2736]: I0130 14:18:33.708886 2736 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 14:18:33.710413 kubelet[2736]: I0130 14:18:33.710350 2736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:18:33.716237 kubelet[2736]: I0130 14:18:33.715536 2736 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:18:33.716789 kubelet[2736]: I0130 14:18:33.716764 2736 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:18:33.720088 kubelet[2736]: I0130 14:18:33.718441 2736 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 14:18:33.722247 kubelet[2736]: E0130 14:18:33.720504 2736 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-dd601a010b\" not found" Jan 30 14:18:33.735969 kubelet[2736]: I0130 14:18:33.731089 2736 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:18:33.735969 kubelet[2736]: I0130 14:18:33.731258 2736 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:18:33.742825 kubelet[2736]: I0130 14:18:33.718656 2736 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:18:33.747240 kubelet[2736]: I0130 14:18:33.746747 2736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:18:33.748086 kubelet[2736]: I0130 14:18:33.748048 2736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:18:33.748086 kubelet[2736]: I0130 14:18:33.748080 2736 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:18:33.749166 kubelet[2736]: I0130 14:18:33.748098 2736 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 14:18:33.749166 kubelet[2736]: E0130 14:18:33.748146 2736 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:18:33.766544 kubelet[2736]: I0130 14:18:33.766509 2736 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:18:33.790958 kubelet[2736]: E0130 14:18:33.790925 2736 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:18:33.828159 kubelet[2736]: I0130 14:18:33.828135 2736 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:18:33.828345 kubelet[2736]: I0130 14:18:33.828332 2736 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:18:33.828408 kubelet[2736]: I0130 14:18:33.828400 2736 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:18:33.828618 kubelet[2736]: I0130 14:18:33.828603 2736 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:18:33.828698 kubelet[2736]: I0130 14:18:33.828674 2736 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:18:33.828742 kubelet[2736]: I0130 14:18:33.828735 2736 policy_none.go:49] "None policy: Start" Jan 30 14:18:33.829540 kubelet[2736]: I0130 14:18:33.829521 2736 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:18:33.829649 kubelet[2736]: I0130 14:18:33.829640 2736 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:18:33.829883 kubelet[2736]: I0130 14:18:33.829872 2736 state_mem.go:75] "Updated machine memory state" Jan 30 14:18:33.834238 kubelet[2736]: I0130 14:18:33.834210 2736 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:18:33.834410 kubelet[2736]: I0130 14:18:33.834391 2736 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:18:33.834457 kubelet[2736]: I0130 14:18:33.834407 2736 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:18:33.835369 kubelet[2736]: I0130 14:18:33.835128 2736 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:18:33.944855 kubelet[2736]: I0130 14:18:33.944233 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.944855 kubelet[2736]: I0130 14:18:33.944307 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.944855 kubelet[2736]: I0130 14:18:33.944354 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.944855 kubelet[2736]: I0130 14:18:33.944394 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 
14:18:33.944855 kubelet[2736]: I0130 14:18:33.944432 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1b9b17d3e887d78b0d7ee8426772742-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-dd601a010b\" (UID: \"b1b9b17d3e887d78b0d7ee8426772742\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.946527 kubelet[2736]: I0130 14:18:33.944469 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ac58fa984cac7510ced29453bbd532a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-dd601a010b\" (UID: \"3ac58fa984cac7510ced29453bbd532a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.946527 kubelet[2736]: I0130 14:18:33.944506 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45c20d51d131dc0c5dc121184a74fcbe-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-dd601a010b\" (UID: \"45c20d51d131dc0c5dc121184a74fcbe\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.946527 kubelet[2736]: I0130 14:18:33.944542 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1b9b17d3e887d78b0d7ee8426772742-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-dd601a010b\" (UID: \"b1b9b17d3e887d78b0d7ee8426772742\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.946527 kubelet[2736]: I0130 14:18:33.944579 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1b9b17d3e887d78b0d7ee8426772742-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-dd601a010b\" (UID: \"b1b9b17d3e887d78b0d7ee8426772742\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.947589 kubelet[2736]: I0130 14:18:33.947515 2736 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.959702 kubelet[2736]: I0130 14:18:33.959311 2736 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:33.959702 kubelet[2736]: I0130 14:18:33.959406 2736 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-2-dd601a010b" Jan 30 14:18:34.693254 kubelet[2736]: I0130 14:18:34.693152 2736 apiserver.go:52] "Watching apiserver" Jan 30 14:18:34.721070 kubelet[2736]: I0130 14:18:34.721004 2736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 14:18:34.841834 kubelet[2736]: I0130 14:18:34.840357 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-2-dd601a010b" podStartSLOduration=1.840341471 podStartE2EDuration="1.840341471s" podCreationTimestamp="2025-01-30 14:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:18:34.83940536 +0000 UTC m=+1.212673706" watchObservedRunningTime="2025-01-30 14:18:34.840341471 +0000 UTC m=+1.213609777" Jan 30 14:18:34.865567 kubelet[2736]: I0130 14:18:34.865057 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081-3-0-2-dd601a010b" podStartSLOduration=1.8650370010000001 podStartE2EDuration="1.865037001s" podCreationTimestamp="2025-01-30 14:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:18:34.862454745 +0000 UTC m=+1.235723091" watchObservedRunningTime="2025-01-30 14:18:34.865037001 +0000 UTC m=+1.238305347" Jan 30 14:18:34.901886 kubelet[2736]: I0130 14:18:34.901827 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-2-dd601a010b" podStartSLOduration=1.901808059 podStartE2EDuration="1.901808059s" podCreationTimestamp="2025-01-30 14:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:18:34.887160195 +0000 UTC m=+1.260428541" watchObservedRunningTime="2025-01-30 14:18:34.901808059 +0000 UTC m=+1.275076405" Jan 30 14:18:39.181807 sudo[1912]: pam_unix(sudo:session): session closed for user root Jan 30 14:18:39.342455 sshd[1909]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:39.347504 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:18:39.348636 systemd[1]: sshd@13-157.90.246.176:22-139.178.68.195:58122.service: Deactivated successfully. Jan 30 14:18:39.351017 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:18:39.351502 systemd[1]: session-7.scope: Consumed 6.742s CPU time, 152.3M memory peak, 0B memory swap peak. Jan 30 14:18:39.353586 systemd-logind[1459]: Removed session 7. Jan 30 14:18:39.602612 kubelet[2736]: I0130 14:18:39.602395 2736 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:18:39.603791 containerd[1487]: time="2025-01-30T14:18:39.603421172Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:18:39.605335 kubelet[2736]: I0130 14:18:39.603643 2736 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:18:40.258050 systemd[1]: Created slice kubepods-besteffort-pod2743de5e_2948_4401_80ac_60deed90edea.slice - libcontainer container kubepods-besteffort-pod2743de5e_2948_4401_80ac_60deed90edea.slice. 
Jan 30 14:18:40.286612 kubelet[2736]: I0130 14:18:40.286345 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sc7g\" (UniqueName: \"kubernetes.io/projected/2743de5e-2948-4401-80ac-60deed90edea-kube-api-access-9sc7g\") pod \"kube-proxy-gc68m\" (UID: \"2743de5e-2948-4401-80ac-60deed90edea\") " pod="kube-system/kube-proxy-gc68m" Jan 30 14:18:40.286612 kubelet[2736]: I0130 14:18:40.286422 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2743de5e-2948-4401-80ac-60deed90edea-xtables-lock\") pod \"kube-proxy-gc68m\" (UID: \"2743de5e-2948-4401-80ac-60deed90edea\") " pod="kube-system/kube-proxy-gc68m" Jan 30 14:18:40.286612 kubelet[2736]: I0130 14:18:40.286463 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2743de5e-2948-4401-80ac-60deed90edea-lib-modules\") pod \"kube-proxy-gc68m\" (UID: \"2743de5e-2948-4401-80ac-60deed90edea\") " pod="kube-system/kube-proxy-gc68m" Jan 30 14:18:40.286612 kubelet[2736]: I0130 14:18:40.286504 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2743de5e-2948-4401-80ac-60deed90edea-kube-proxy\") pod \"kube-proxy-gc68m\" (UID: \"2743de5e-2948-4401-80ac-60deed90edea\") " pod="kube-system/kube-proxy-gc68m" Jan 30 14:18:40.574458 containerd[1487]: time="2025-01-30T14:18:40.574393413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gc68m,Uid:2743de5e-2948-4401-80ac-60deed90edea,Namespace:kube-system,Attempt:0,}" Jan 30 14:18:40.612135 containerd[1487]: time="2025-01-30T14:18:40.611978511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:18:40.612135 containerd[1487]: time="2025-01-30T14:18:40.612036791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:18:40.612135 containerd[1487]: time="2025-01-30T14:18:40.612062951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:40.612625 containerd[1487]: time="2025-01-30T14:18:40.612276469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:40.636462 systemd[1]: Started cri-containerd-1e5ae2a8c83a1bc9424ca1bf688d24320b4c6583e9647c6bd4829d78cfaf2450.scope - libcontainer container 1e5ae2a8c83a1bc9424ca1bf688d24320b4c6583e9647c6bd4829d78cfaf2450. 
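A few entries above, the kubelet pushed the node's PodCIDR 192.168.0.0/24 down to the runtime. A quick standard-library inspection of what that range gives this node's pods:

```go
package main

import (
	"fmt"
	"net"
)

// Inspect the PodCIDR the kubelet applied above: a /24 yields 256
// addresses for this node's pod network.
func main() {
	_, cidr, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size()
	fmt.Printf("network=%s addresses=%d\n", cidr, 1<<(bits-ones)) // network=192.168.0.0/24 addresses=256
	fmt.Println(cidr.Contains(net.ParseIP("192.168.0.42")))       // true
}
```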
Jan 30 14:18:40.666763 containerd[1487]: time="2025-01-30T14:18:40.666720392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gc68m,Uid:2743de5e-2948-4401-80ac-60deed90edea,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e5ae2a8c83a1bc9424ca1bf688d24320b4c6583e9647c6bd4829d78cfaf2450\"" Jan 30 14:18:40.673398 containerd[1487]: time="2025-01-30T14:18:40.673349858Z" level=info msg="CreateContainer within sandbox \"1e5ae2a8c83a1bc9424ca1bf688d24320b4c6583e9647c6bd4829d78cfaf2450\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:18:40.702213 containerd[1487]: time="2025-01-30T14:18:40.702144667Z" level=info msg="CreateContainer within sandbox \"1e5ae2a8c83a1bc9424ca1bf688d24320b4c6583e9647c6bd4829d78cfaf2450\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"896c8bbcadbdc74e94a24afe8ae4f5648a544af489caf0ceb442f01efe625b82\"" Jan 30 14:18:40.704580 containerd[1487]: time="2025-01-30T14:18:40.703002100Z" level=info msg="StartContainer for \"896c8bbcadbdc74e94a24afe8ae4f5648a544af489caf0ceb442f01efe625b82\"" Jan 30 14:18:40.736474 systemd[1]: Started cri-containerd-896c8bbcadbdc74e94a24afe8ae4f5648a544af489caf0ceb442f01efe625b82.scope - libcontainer container 896c8bbcadbdc74e94a24afe8ae4f5648a544af489caf0ceb442f01efe625b82. Jan 30 14:18:40.787780 containerd[1487]: time="2025-01-30T14:18:40.787722180Z" level=info msg="StartContainer for \"896c8bbcadbdc74e94a24afe8ae4f5648a544af489caf0ceb442f01efe625b82\" returns successfully" Jan 30 14:18:40.800445 systemd[1]: Created slice kubepods-besteffort-pod367a188a_2444_4986_8039_de042ddbdfdb.slice - libcontainer container kubepods-besteffort-pod367a188a_2444_4986_8039_de042ddbdfdb.slice. Jan 30 14:18:40.852113 kubelet[2736]: I0130 14:18:40.851969 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gc68m" podStartSLOduration=0.851951744 podStartE2EDuration="851.951744ms" podCreationTimestamp="2025-01-30 14:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:18:40.851912184 +0000 UTC m=+7.225180610" watchObservedRunningTime="2025-01-30 14:18:40.851951744 +0000 UTC m=+7.225220090" Jan 30 14:18:40.892981 kubelet[2736]: I0130 14:18:40.892887 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/367a188a-2444-4986-8039-de042ddbdfdb-var-lib-calico\") pod \"tigera-operator-76c4976dd7-s8m99\" (UID: \"367a188a-2444-4986-8039-de042ddbdfdb\") " pod="tigera-operator/tigera-operator-76c4976dd7-s8m99" Jan 30 14:18:40.892981 kubelet[2736]: I0130 14:18:40.892936 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjwhn\" (UniqueName: \"kubernetes.io/projected/367a188a-2444-4986-8039-de042ddbdfdb-kube-api-access-kjwhn\") pod \"tigera-operator-76c4976dd7-s8m99\" (UID: \"367a188a-2444-4986-8039-de042ddbdfdb\") " pod="tigera-operator/tigera-operator-76c4976dd7-s8m99" Jan 30 14:18:41.098513 systemd[1]: Started sshd@15-157.90.246.176:22-162.240.226.19:39136.service - OpenSSH per-connection server daemon (162.240.226.19:39136). 
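The pod_startup_latency_tracker entry above reports podStartSLOduration=851.951744ms for kube-proxy: with no image pull (firstStartedPulling is the zero time), the SLO duration reduces to observed-running minus pod creation. Which observed timestamp the tracker subtracts internally is an assumption here; the arithmetic below reproduces the figure from the log:

```go
package main

import (
	"fmt"
	"time"
)

// Reconstruct kube-proxy's podStartSLOduration from the two
// timestamps reported in the log line above.
func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-30 14:18:40 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-01-30 14:18:40.851951744 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 851.951744ms
}
```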
Jan 30 14:18:41.106892 containerd[1487]: time="2025-01-30T14:18:41.106611798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-s8m99,Uid:367a188a-2444-4986-8039-de042ddbdfdb,Namespace:tigera-operator,Attempt:0,}" Jan 30 14:18:41.141482 containerd[1487]: time="2025-01-30T14:18:41.138830225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:18:41.141482 containerd[1487]: time="2025-01-30T14:18:41.138899144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:18:41.141482 containerd[1487]: time="2025-01-30T14:18:41.138930224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:41.141482 containerd[1487]: time="2025-01-30T14:18:41.139021143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:41.161468 systemd[1]: Started cri-containerd-e0a982e46a1001f6a968e76e1bcbf54cda00cdf546edec0ca4724821d679eaed.scope - libcontainer container e0a982e46a1001f6a968e76e1bcbf54cda00cdf546edec0ca4724821d679eaed. Jan 30 14:18:41.203666 containerd[1487]: time="2025-01-30T14:18:41.203596477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-s8m99,Uid:367a188a-2444-4986-8039-de042ddbdfdb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e0a982e46a1001f6a968e76e1bcbf54cda00cdf546edec0ca4724821d679eaed\"" Jan 30 14:18:41.205964 containerd[1487]: time="2025-01-30T14:18:41.205927819Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 14:18:42.158774 sshd[2931]: Received disconnect from 162.240.226.19 port 39136:11: Bye Bye [preauth] Jan 30 14:18:42.158774 sshd[2931]: Disconnected from authenticating user root 162.240.226.19 port 39136 [preauth] Jan 30 14:18:42.162145 systemd[1]: sshd@15-157.90.246.176:22-162.240.226.19:39136.service: Deactivated successfully. Jan 30 14:18:43.221296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055379341.mount: Deactivated successfully. 
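The tmpmount unit name two entries up contains `\x2d`, which is systemd's hex escape for a literal "-" inside the path component "containerd-mount1055379341" (plain "-" is reserved as the path separator in unit names). A simplified sketch of that path escaping; real systemd-escape has additional rules (for example, leading dots), so this only covers the case shown in the log:

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal systemd path escaping: "/" separators become "-", and bytes
// outside [a-zA-Z0-9:_.] are hex-escaped as \xXX.
func escapePath(p string) string {
	parts := strings.Split(strings.Trim(p, "/"), "/")
	var b strings.Builder
	for i, part := range parts {
		if i > 0 {
			b.WriteByte('-')
		}
		for j := 0; j < len(part); j++ {
			c := part[j]
			switch {
			case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
				c >= '0' && c <= '9', c == ':', c == '_', c == '.':
				b.WriteByte(c)
			default:
				fmt.Fprintf(&b, `\x%02x`, c)
			}
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount1055379341") + ".mount")
	// var-lib-containerd-tmpmounts-containerd\x2dmount1055379341.mount
}
```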
Jan 30 14:18:43.557984 containerd[1487]: time="2025-01-30T14:18:43.557932914Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:43.559296 containerd[1487]: time="2025-01-30T14:18:43.559257424Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 30 14:18:43.559630 containerd[1487]: time="2025-01-30T14:18:43.559581542Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:43.562211 containerd[1487]: time="2025-01-30T14:18:43.562168602Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:43.563092 containerd[1487]: time="2025-01-30T14:18:43.562969716Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.356999418s" Jan 30 14:18:43.563092 containerd[1487]: time="2025-01-30T14:18:43.563005356Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 30 14:18:43.566027 containerd[1487]: time="2025-01-30T14:18:43.565981734Z" level=info msg="CreateContainer within sandbox \"e0a982e46a1001f6a968e76e1bcbf54cda00cdf546edec0ca4724821d679eaed\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 14:18:43.588471 containerd[1487]: time="2025-01-30T14:18:43.588402806Z" level=info msg="CreateContainer within sandbox \"e0a982e46a1001f6a968e76e1bcbf54cda00cdf546edec0ca4724821d679eaed\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022\"" Jan 30 14:18:43.591494 containerd[1487]: time="2025-01-30T14:18:43.589348359Z" level=info msg="StartContainer for \"416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022\"" Jan 30 14:18:43.624435 systemd[1]: Started cri-containerd-416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022.scope - libcontainer container 416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022. Jan 30 14:18:43.652053 containerd[1487]: time="2025-01-30T14:18:43.651999290Z" level=info msg="StartContainer for \"416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022\" returns successfully" Jan 30 14:18:44.618695 systemd[1]: Started sshd@16-157.90.246.176:22-178.128.232.125:51458.service - OpenSSH per-connection server daemon (178.128.232.125:51458). Jan 30 14:18:45.265639 sshd[3114]: Invalid user admin from 178.128.232.125 port 51458 Jan 30 14:18:45.383682 sshd[3114]: Received disconnect from 178.128.232.125 port 51458:11: Bye Bye [preauth] Jan 30 14:18:45.383682 sshd[3114]: Disconnected from invalid user admin 178.128.232.125 port 51458 [preauth] Jan 30 14:18:45.385804 systemd[1]: sshd@16-157.90.246.176:22-178.128.232.125:51458.service: Deactivated successfully. 
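The tigera/operator pull above reports "bytes read=19124160" over 2.356999418s, roughly 7.7 MiB/s. Back-of-the-envelope arithmetic with the numbers from the log:

```go
package main

import "fmt"

// Pull throughput for quay.io/tigera/operator:v1.36.2, using the
// byte count and duration reported in the containerd entries above.
func main() {
	const bytes = 19124160.0
	const secs = 2.356999418
	fmt.Printf("%.2f MiB/s\n", bytes/secs/(1<<20)) // 7.74 MiB/s
}
```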
Jan 30 14:18:45.930977 kubelet[2736]: I0130 14:18:45.930808 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-s8m99" podStartSLOduration=3.571872203 podStartE2EDuration="5.930783967s" podCreationTimestamp="2025-01-30 14:18:40 +0000 UTC" firstStartedPulling="2025-01-30 14:18:41.205322863 +0000 UTC m=+7.578591209" lastFinishedPulling="2025-01-30 14:18:43.564234667 +0000 UTC m=+9.937502973" observedRunningTime="2025-01-30 14:18:43.845054645 +0000 UTC m=+10.218322991" watchObservedRunningTime="2025-01-30 14:18:45.930783967 +0000 UTC m=+12.304052353" Jan 30 14:18:47.711937 systemd[1]: Created slice kubepods-besteffort-pod5907c254_6732_4e9e_b901_1894ff2e0387.slice - libcontainer container kubepods-besteffort-pod5907c254_6732_4e9e_b901_1894ff2e0387.slice. Jan 30 14:18:47.739354 kubelet[2736]: I0130 14:18:47.739237 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5907c254-6732-4e9e-b901-1894ff2e0387-tigera-ca-bundle\") pod \"calico-typha-869f5c767d-q9wmd\" (UID: \"5907c254-6732-4e9e-b901-1894ff2e0387\") " pod="calico-system/calico-typha-869f5c767d-q9wmd" Jan 30 14:18:47.739354 kubelet[2736]: I0130 14:18:47.739293 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5907c254-6732-4e9e-b901-1894ff2e0387-typha-certs\") pod \"calico-typha-869f5c767d-q9wmd\" (UID: \"5907c254-6732-4e9e-b901-1894ff2e0387\") " pod="calico-system/calico-typha-869f5c767d-q9wmd" Jan 30 14:18:47.739354 kubelet[2736]: I0130 14:18:47.739317 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4l67\" (UniqueName: \"kubernetes.io/projected/5907c254-6732-4e9e-b901-1894ff2e0387-kube-api-access-x4l67\") pod \"calico-typha-869f5c767d-q9wmd\" (UID: \"5907c254-6732-4e9e-b901-1894ff2e0387\") " pod="calico-system/calico-typha-869f5c767d-q9wmd" Jan 30 14:18:47.865944 systemd[1]: Created slice kubepods-besteffort-podd4504b36_cb92_4446_bb08_675e2ad3c4ac.slice - libcontainer container kubepods-besteffort-podd4504b36_cb92_4446_bb08_675e2ad3c4ac.slice. 
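Below, once calico-node's volumes are reconciled, the kubelet starts flooding the log with FlexVolume probe failures for the `nodeagent~uds` driver: the `uds` binary is not in $PATH, so the driver's "init" call produces empty output, and decoding an empty string as JSON yields exactly "unexpected end of JSON input". A sketch reproducing that error with encoding/json:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// When the FlexVolume driver binary is missing, the "init" call
// returns no output; unmarshalling "" fails with the same error the
// kubelet logs repeatedly below.
func main() {
	var out map[string]interface{}
	err := json.Unmarshal([]byte(""), &out)
	fmt.Println(err) // unexpected end of JSON input
}
```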
Jan 30 14:18:47.941904 kubelet[2736]: I0130 14:18:47.941411 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4504b36-cb92-4446-bb08-675e2ad3c4ac-xtables-lock\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.941904 kubelet[2736]: I0130 14:18:47.941474 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d4504b36-cb92-4446-bb08-675e2ad3c4ac-node-certs\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.941904 kubelet[2736]: I0130 14:18:47.941505 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d4504b36-cb92-4446-bb08-675e2ad3c4ac-cni-bin-dir\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.941904 kubelet[2736]: I0130 14:18:47.941531 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d4504b36-cb92-4446-bb08-675e2ad3c4ac-var-run-calico\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.941904 kubelet[2736]: I0130 14:18:47.941567 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d4504b36-cb92-4446-bb08-675e2ad3c4ac-var-lib-calico\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.942370 kubelet[2736]: I0130 14:18:47.941594 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d4504b36-cb92-4446-bb08-675e2ad3c4ac-cni-net-dir\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.942370 kubelet[2736]: I0130 14:18:47.941617 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvdd\" (UniqueName: \"kubernetes.io/projected/d4504b36-cb92-4446-bb08-675e2ad3c4ac-kube-api-access-cnvdd\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.942370 kubelet[2736]: I0130 14:18:47.941642 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4504b36-cb92-4446-bb08-675e2ad3c4ac-lib-modules\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.942370 kubelet[2736]: I0130 14:18:47.941670 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d4504b36-cb92-4446-bb08-675e2ad3c4ac-flexvol-driver-host\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.942370 kubelet[2736]: I0130 14:18:47.941697 2736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d4504b36-cb92-4446-bb08-675e2ad3c4ac-policysync\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.942552 kubelet[2736]: I0130 14:18:47.941734 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4504b36-cb92-4446-bb08-675e2ad3c4ac-tigera-ca-bundle\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:47.942552 kubelet[2736]: I0130 14:18:47.941757 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d4504b36-cb92-4446-bb08-675e2ad3c4ac-cni-log-dir\") pod \"calico-node-6crts\" (UID: \"d4504b36-cb92-4446-bb08-675e2ad3c4ac\") " pod="calico-system/calico-node-6crts" Jan 30 14:18:48.018906 kubelet[2736]: E0130 14:18:48.018557 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqhl" podUID="36f3a4c2-f842-4550-b82d-bc5e5af52ab2" Jan 30 14:18:48.023123 containerd[1487]: time="2025-01-30T14:18:48.022176438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-869f5c767d-q9wmd,Uid:5907c254-6732-4e9e-b901-1894ff2e0387,Namespace:calico-system,Attempt:0,}" Jan 30 14:18:48.049461 kubelet[2736]: E0130 14:18:48.049433 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.049827 kubelet[2736]: W0130 14:18:48.049592 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.049827 kubelet[2736]: E0130 14:18:48.049621 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.051641 kubelet[2736]: E0130 14:18:48.051594 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.051866 kubelet[2736]: W0130 14:18:48.051616 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.051866 kubelet[2736]: E0130 14:18:48.051819 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.071511 kubelet[2736]: E0130 14:18:48.071481 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.072285 kubelet[2736]: W0130 14:18:48.072209 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.072285 kubelet[2736]: E0130 14:18:48.072242 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.072427 containerd[1487]: time="2025-01-30T14:18:48.071924106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:18:48.072427 containerd[1487]: time="2025-01-30T14:18:48.071991985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:18:48.072427 containerd[1487]: time="2025-01-30T14:18:48.072007545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:48.072427 containerd[1487]: time="2025-01-30T14:18:48.072129664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:48.096455 systemd[1]: Started cri-containerd-c45452900ff14cb8c6eeece790de384a019aef571c6a7fcbd4fd46efaf5124d2.scope - libcontainer container c45452900ff14cb8c6eeece790de384a019aef571c6a7fcbd4fd46efaf5124d2. Jan 30 14:18:48.112322 kubelet[2736]: E0130 14:18:48.112294 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.112552 kubelet[2736]: W0130 14:18:48.112483 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.112552 kubelet[2736]: E0130 14:18:48.112508 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.114527 kubelet[2736]: E0130 14:18:48.114393 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.114527 kubelet[2736]: W0130 14:18:48.114412 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.114527 kubelet[2736]: E0130 14:18:48.114436 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.115000 kubelet[2736]: E0130 14:18:48.114874 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.115000 kubelet[2736]: W0130 14:18:48.114887 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.115000 kubelet[2736]: E0130 14:18:48.114899 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.115362 kubelet[2736]: E0130 14:18:48.115144 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.115362 kubelet[2736]: W0130 14:18:48.115153 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.115362 kubelet[2736]: E0130 14:18:48.115163 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.115590 kubelet[2736]: E0130 14:18:48.115488 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.115590 kubelet[2736]: W0130 14:18:48.115501 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.115590 kubelet[2736]: E0130 14:18:48.115511 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.115828 kubelet[2736]: E0130 14:18:48.115805 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.115947 kubelet[2736]: W0130 14:18:48.115887 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.115947 kubelet[2736]: E0130 14:18:48.115903 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.116274 kubelet[2736]: E0130 14:18:48.116206 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.116274 kubelet[2736]: W0130 14:18:48.116218 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.116274 kubelet[2736]: E0130 14:18:48.116228 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.116874 kubelet[2736]: E0130 14:18:48.116652 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.116874 kubelet[2736]: W0130 14:18:48.116665 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.116874 kubelet[2736]: E0130 14:18:48.116676 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.118366 kubelet[2736]: E0130 14:18:48.118242 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.118366 kubelet[2736]: W0130 14:18:48.118257 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.118366 kubelet[2736]: E0130 14:18:48.118269 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.118614 kubelet[2736]: E0130 14:18:48.118559 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.118614 kubelet[2736]: W0130 14:18:48.118570 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.118614 kubelet[2736]: E0130 14:18:48.118581 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.119098 kubelet[2736]: E0130 14:18:48.119005 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.119098 kubelet[2736]: W0130 14:18:48.119018 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.119098 kubelet[2736]: E0130 14:18:48.119029 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.119381 kubelet[2736]: E0130 14:18:48.119371 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.119545 kubelet[2736]: W0130 14:18:48.119434 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.119545 kubelet[2736]: E0130 14:18:48.119450 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.119724 kubelet[2736]: E0130 14:18:48.119686 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.119877 kubelet[2736]: W0130 14:18:48.119789 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.119985 kubelet[2736]: E0130 14:18:48.119954 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.120694 kubelet[2736]: E0130 14:18:48.120506 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.120694 kubelet[2736]: W0130 14:18:48.120519 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.120694 kubelet[2736]: E0130 14:18:48.120530 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.121606 kubelet[2736]: E0130 14:18:48.121486 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.121606 kubelet[2736]: W0130 14:18:48.121501 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.121606 kubelet[2736]: E0130 14:18:48.121511 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.122124 kubelet[2736]: E0130 14:18:48.122004 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.122124 kubelet[2736]: W0130 14:18:48.122018 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.122124 kubelet[2736]: E0130 14:18:48.122031 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.122953 kubelet[2736]: E0130 14:18:48.122768 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.122953 kubelet[2736]: W0130 14:18:48.122787 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.122953 kubelet[2736]: E0130 14:18:48.122799 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.124008 kubelet[2736]: E0130 14:18:48.123658 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.124008 kubelet[2736]: W0130 14:18:48.123675 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.124008 kubelet[2736]: E0130 14:18:48.123686 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.124725 kubelet[2736]: E0130 14:18:48.124528 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.124725 kubelet[2736]: W0130 14:18:48.124541 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.124725 kubelet[2736]: E0130 14:18:48.124578 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.125611 kubelet[2736]: E0130 14:18:48.125419 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.125611 kubelet[2736]: W0130 14:18:48.125433 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.125611 kubelet[2736]: E0130 14:18:48.125445 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.144045 kubelet[2736]: E0130 14:18:48.144021 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.144326 kubelet[2736]: W0130 14:18:48.144192 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.144326 kubelet[2736]: E0130 14:18:48.144236 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.144326 kubelet[2736]: I0130 14:18:48.144275 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36f3a4c2-f842-4550-b82d-bc5e5af52ab2-kubelet-dir\") pod \"csi-node-driver-7cqhl\" (UID: \"36f3a4c2-f842-4550-b82d-bc5e5af52ab2\") " pod="calico-system/csi-node-driver-7cqhl" Jan 30 14:18:48.144740 kubelet[2736]: E0130 14:18:48.144611 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.144740 kubelet[2736]: W0130 14:18:48.144624 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.144740 kubelet[2736]: E0130 14:18:48.144643 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.144740 kubelet[2736]: I0130 14:18:48.144663 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/36f3a4c2-f842-4550-b82d-bc5e5af52ab2-registration-dir\") pod \"csi-node-driver-7cqhl\" (UID: \"36f3a4c2-f842-4550-b82d-bc5e5af52ab2\") " pod="calico-system/csi-node-driver-7cqhl" Jan 30 14:18:48.145406 kubelet[2736]: E0130 14:18:48.145313 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.145406 kubelet[2736]: W0130 14:18:48.145327 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.145406 kubelet[2736]: E0130 14:18:48.145347 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.145406 kubelet[2736]: I0130 14:18:48.145377 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/36f3a4c2-f842-4550-b82d-bc5e5af52ab2-varrun\") pod \"csi-node-driver-7cqhl\" (UID: \"36f3a4c2-f842-4550-b82d-bc5e5af52ab2\") " pod="calico-system/csi-node-driver-7cqhl" Jan 30 14:18:48.146125 kubelet[2736]: E0130 14:18:48.146040 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.146125 kubelet[2736]: W0130 14:18:48.146054 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.146125 kubelet[2736]: E0130 14:18:48.146083 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.147305 kubelet[2736]: E0130 14:18:48.146612 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.147305 kubelet[2736]: W0130 14:18:48.146626 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.147502 kubelet[2736]: E0130 14:18:48.147429 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.147807 kubelet[2736]: E0130 14:18:48.147692 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.147807 kubelet[2736]: W0130 14:18:48.147760 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.148001 kubelet[2736]: E0130 14:18:48.147844 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.148352 kubelet[2736]: E0130 14:18:48.148192 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.148352 kubelet[2736]: W0130 14:18:48.148234 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.148352 kubelet[2736]: E0130 14:18:48.148314 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.148352 kubelet[2736]: I0130 14:18:48.148336 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfk9r\" (UniqueName: \"kubernetes.io/projected/36f3a4c2-f842-4550-b82d-bc5e5af52ab2-kube-api-access-wfk9r\") pod \"csi-node-driver-7cqhl\" (UID: \"36f3a4c2-f842-4550-b82d-bc5e5af52ab2\") " pod="calico-system/csi-node-driver-7cqhl" Jan 30 14:18:48.149070 kubelet[2736]: E0130 14:18:48.148969 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.149070 kubelet[2736]: W0130 14:18:48.148988 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.149302 kubelet[2736]: E0130 14:18:48.149173 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.149414 kubelet[2736]: E0130 14:18:48.149403 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.149647 kubelet[2736]: W0130 14:18:48.149465 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.149647 kubelet[2736]: E0130 14:18:48.149480 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.150136 kubelet[2736]: E0130 14:18:48.150048 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.150136 kubelet[2736]: W0130 14:18:48.150060 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.150136 kubelet[2736]: E0130 14:18:48.150073 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.150487 kubelet[2736]: E0130 14:18:48.150395 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.150487 kubelet[2736]: W0130 14:18:48.150407 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.150487 kubelet[2736]: E0130 14:18:48.150422 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.150487 kubelet[2736]: I0130 14:18:48.150441 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/36f3a4c2-f842-4550-b82d-bc5e5af52ab2-socket-dir\") pod \"csi-node-driver-7cqhl\" (UID: \"36f3a4c2-f842-4550-b82d-bc5e5af52ab2\") " pod="calico-system/csi-node-driver-7cqhl" Jan 30 14:18:48.150943 kubelet[2736]: E0130 14:18:48.150842 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.150943 kubelet[2736]: W0130 14:18:48.150855 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.150943 kubelet[2736]: E0130 14:18:48.150865 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.151234 kubelet[2736]: E0130 14:18:48.151149 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.151234 kubelet[2736]: W0130 14:18:48.151159 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.151234 kubelet[2736]: E0130 14:18:48.151170 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.151558 kubelet[2736]: E0130 14:18:48.151484 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.151558 kubelet[2736]: W0130 14:18:48.151495 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.151558 kubelet[2736]: E0130 14:18:48.151504 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.151880 kubelet[2736]: E0130 14:18:48.151815 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.151880 kubelet[2736]: W0130 14:18:48.151825 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.151880 kubelet[2736]: E0130 14:18:48.151835 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.171148 containerd[1487]: time="2025-01-30T14:18:48.171042324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6crts,Uid:d4504b36-cb92-4446-bb08-675e2ad3c4ac,Namespace:calico-system,Attempt:0,}" Jan 30 14:18:48.190720 containerd[1487]: time="2025-01-30T14:18:48.190530914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-869f5c767d-q9wmd,Uid:5907c254-6732-4e9e-b901-1894ff2e0387,Namespace:calico-system,Attempt:0,} returns sandbox id \"c45452900ff14cb8c6eeece790de384a019aef571c6a7fcbd4fd46efaf5124d2\"" Jan 30 14:18:48.194613 containerd[1487]: time="2025-01-30T14:18:48.194570287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 14:18:48.209455 containerd[1487]: time="2025-01-30T14:18:48.209034191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:18:48.209455 containerd[1487]: time="2025-01-30T14:18:48.209162910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:18:48.210549 containerd[1487]: time="2025-01-30T14:18:48.210347862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:48.214891 containerd[1487]: time="2025-01-30T14:18:48.211325255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:18:48.231383 systemd[1]: Started cri-containerd-3033a37ea6cd234f2fb1a924f6c65fd44c542a5c0dbcdb7ee4f5c2934a90575f.scope - libcontainer container 3033a37ea6cd234f2fb1a924f6c65fd44c542a5c0dbcdb7ee4f5c2934a90575f. Jan 30 14:18:48.252917 kubelet[2736]: E0130 14:18:48.252887 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.253090 kubelet[2736]: W0130 14:18:48.253075 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.253228 kubelet[2736]: E0130 14:18:48.253166 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.253710 kubelet[2736]: E0130 14:18:48.253596 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.253710 kubelet[2736]: W0130 14:18:48.253609 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.253710 kubelet[2736]: E0130 14:18:48.253627 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.254427 kubelet[2736]: E0130 14:18:48.254243 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.254427 kubelet[2736]: W0130 14:18:48.254262 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.254427 kubelet[2736]: E0130 14:18:48.254290 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.254767 kubelet[2736]: E0130 14:18:48.254637 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.254767 kubelet[2736]: W0130 14:18:48.254649 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.254998 kubelet[2736]: E0130 14:18:48.254880 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.254998 kubelet[2736]: W0130 14:18:48.254889 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.254998 kubelet[2736]: E0130 14:18:48.254901 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.255254 kubelet[2736]: E0130 14:18:48.255128 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.255254 kubelet[2736]: W0130 14:18:48.255140 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.255254 kubelet[2736]: E0130 14:18:48.255149 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.255676 kubelet[2736]: E0130 14:18:48.255427 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.255676 kubelet[2736]: W0130 14:18:48.255438 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.255676 kubelet[2736]: E0130 14:18:48.255447 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.256241 kubelet[2736]: E0130 14:18:48.256020 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.256241 kubelet[2736]: E0130 14:18:48.256163 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.256241 kubelet[2736]: W0130 14:18:48.256172 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.256241 kubelet[2736]: E0130 14:18:48.256187 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.256744 kubelet[2736]: E0130 14:18:48.256583 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.256744 kubelet[2736]: W0130 14:18:48.256594 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.256744 kubelet[2736]: E0130 14:18:48.256613 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.257052 kubelet[2736]: E0130 14:18:48.256945 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.257052 kubelet[2736]: W0130 14:18:48.256956 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.257052 kubelet[2736]: E0130 14:18:48.256973 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.257524 kubelet[2736]: E0130 14:18:48.257372 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.257524 kubelet[2736]: W0130 14:18:48.257383 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.257524 kubelet[2736]: E0130 14:18:48.257442 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.257991 kubelet[2736]: E0130 14:18:48.257801 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.257991 kubelet[2736]: W0130 14:18:48.257812 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.258223 kubelet[2736]: E0130 14:18:48.258062 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.258561 kubelet[2736]: E0130 14:18:48.258398 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.258561 kubelet[2736]: W0130 14:18:48.258500 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.258922 kubelet[2736]: E0130 14:18:48.258829 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.259359 kubelet[2736]: E0130 14:18:48.259344 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.259554 kubelet[2736]: W0130 14:18:48.259453 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.259802 kubelet[2736]: E0130 14:18:48.259759 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.260102 kubelet[2736]: E0130 14:18:48.260013 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.260102 kubelet[2736]: W0130 14:18:48.260026 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.260488 kubelet[2736]: E0130 14:18:48.260324 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.260488 kubelet[2736]: W0130 14:18:48.260338 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.260574 containerd[1487]: time="2025-01-30T14:18:48.260255409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6crts,Uid:d4504b36-cb92-4446-bb08-675e2ad3c4ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"3033a37ea6cd234f2fb1a924f6c65fd44c542a5c0dbcdb7ee4f5c2934a90575f\"" Jan 30 14:18:48.261033 kubelet[2736]: E0130 14:18:48.260660 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.261033 kubelet[2736]: E0130 14:18:48.260682 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.261285 kubelet[2736]: E0130 14:18:48.261267 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.261456 kubelet[2736]: W0130 14:18:48.261347 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.261961 kubelet[2736]: E0130 14:18:48.261761 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.261961 kubelet[2736]: E0130 14:18:48.261859 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.261961 kubelet[2736]: W0130 14:18:48.261865 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.261961 kubelet[2736]: E0130 14:18:48.261943 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.262906 kubelet[2736]: E0130 14:18:48.262770 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.262906 kubelet[2736]: W0130 14:18:48.262786 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.263155 kubelet[2736]: E0130 14:18:48.263018 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.263338 kubelet[2736]: E0130 14:18:48.263323 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.263407 kubelet[2736]: W0130 14:18:48.263395 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.264815 kubelet[2736]: E0130 14:18:48.264637 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.264815 kubelet[2736]: W0130 14:18:48.264654 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.265224 kubelet[2736]: E0130 14:18:48.265153 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.265498 kubelet[2736]: E0130 14:18:48.265396 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.266046 kubelet[2736]: E0130 14:18:48.265890 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.266046 kubelet[2736]: W0130 14:18:48.265903 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.267513 kubelet[2736]: E0130 14:18:48.267191 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:48.267513 kubelet[2736]: E0130 14:18:48.267412 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.267513 kubelet[2736]: W0130 14:18:48.267422 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.267513 kubelet[2736]: E0130 14:18:48.267453 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.267736 kubelet[2736]: E0130 14:18:48.267714 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.267736 kubelet[2736]: W0130 14:18:48.267730 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.267910 kubelet[2736]: E0130 14:18:48.267860 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.267974 kubelet[2736]: E0130 14:18:48.267955 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.267974 kubelet[2736]: W0130 14:18:48.267963 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.267974 kubelet[2736]: E0130 14:18:48.267972 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:48.281048 kubelet[2736]: E0130 14:18:48.280947 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:48.281224 kubelet[2736]: W0130 14:18:48.281182 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:48.281315 kubelet[2736]: E0130 14:18:48.281300 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:49.577542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184625037.mount: Deactivated successfully. 
Jan 30 14:18:49.749895 kubelet[2736]: E0130 14:18:49.748737 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqhl" podUID="36f3a4c2-f842-4550-b82d-bc5e5af52ab2" Jan 30 14:18:50.464318 containerd[1487]: time="2025-01-30T14:18:50.463418269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:50.464855 containerd[1487]: time="2025-01-30T14:18:50.464719180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 30 14:18:50.465903 containerd[1487]: time="2025-01-30T14:18:50.465833813Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:50.468414 containerd[1487]: time="2025-01-30T14:18:50.468363197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:50.469364 containerd[1487]: time="2025-01-30T14:18:50.469329991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.274712344s" Jan 30 14:18:50.469364 containerd[1487]: time="2025-01-30T14:18:50.469364591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 30 14:18:50.470944 containerd[1487]: time="2025-01-30T14:18:50.470836141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 14:18:50.486344 containerd[1487]: time="2025-01-30T14:18:50.486302762Z" level=info msg="CreateContainer within sandbox \"c45452900ff14cb8c6eeece790de384a019aef571c6a7fcbd4fd46efaf5124d2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 14:18:50.502074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584939721.mount: Deactivated successfully. Jan 30 14:18:50.507622 containerd[1487]: time="2025-01-30T14:18:50.507456507Z" level=info msg="CreateContainer within sandbox \"c45452900ff14cb8c6eeece790de384a019aef571c6a7fcbd4fd46efaf5124d2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5466e58d1211bab481a738f599b33974b4405e618292631d98069617afd7d5c6\"" Jan 30 14:18:50.510277 containerd[1487]: time="2025-01-30T14:18:50.508291062Z" level=info msg="StartContainer for \"5466e58d1211bab481a738f599b33974b4405e618292631d98069617afd7d5c6\"" Jan 30 14:18:50.541495 systemd[1]: Started cri-containerd-5466e58d1211bab481a738f599b33974b4405e618292631d98069617afd7d5c6.scope - libcontainer container 5466e58d1211bab481a738f599b33974b4405e618292631d98069617afd7d5c6. 
Jan 30 14:18:50.581332 containerd[1487]: time="2025-01-30T14:18:50.581187517Z" level=info msg="StartContainer for \"5466e58d1211bab481a738f599b33974b4405e618292631d98069617afd7d5c6\" returns successfully" Jan 30 14:18:50.946967 kubelet[2736]: E0130 14:18:50.946872 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.946967 kubelet[2736]: W0130 14:18:50.946951 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.949068 kubelet[2736]: E0130 14:18:50.947041 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.949068 kubelet[2736]: E0130 14:18:50.947538 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.949068 kubelet[2736]: W0130 14:18:50.947549 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.949068 kubelet[2736]: E0130 14:18:50.947560 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.949068 kubelet[2736]: E0130 14:18:50.947805 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.949068 kubelet[2736]: W0130 14:18:50.947814 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.949068 kubelet[2736]: E0130 14:18:50.947824 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.949068 kubelet[2736]: E0130 14:18:50.948027 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.949068 kubelet[2736]: W0130 14:18:50.948037 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.949068 kubelet[2736]: E0130 14:18:50.948046 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:50.950879 kubelet[2736]: E0130 14:18:50.948251 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.950879 kubelet[2736]: W0130 14:18:50.948259 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.950879 kubelet[2736]: E0130 14:18:50.948270 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.950879 kubelet[2736]: E0130 14:18:50.948468 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.950879 kubelet[2736]: W0130 14:18:50.948476 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.950879 kubelet[2736]: E0130 14:18:50.948486 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.950879 kubelet[2736]: E0130 14:18:50.948657 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.950879 kubelet[2736]: W0130 14:18:50.948698 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.950879 kubelet[2736]: E0130 14:18:50.948709 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.950879 kubelet[2736]: E0130 14:18:50.948890 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.951789 kubelet[2736]: W0130 14:18:50.948899 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.951789 kubelet[2736]: E0130 14:18:50.948907 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.951789 kubelet[2736]: E0130 14:18:50.949118 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.951789 kubelet[2736]: W0130 14:18:50.949127 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.951789 kubelet[2736]: E0130 14:18:50.949135 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:50.951789 kubelet[2736]: E0130 14:18:50.949330 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.951789 kubelet[2736]: W0130 14:18:50.949339 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.951789 kubelet[2736]: E0130 14:18:50.949347 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.951789 kubelet[2736]: E0130 14:18:50.949517 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.951789 kubelet[2736]: W0130 14:18:50.949546 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.952047 kubelet[2736]: E0130 14:18:50.949556 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.952047 kubelet[2736]: E0130 14:18:50.949736 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.952047 kubelet[2736]: W0130 14:18:50.949745 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.952047 kubelet[2736]: E0130 14:18:50.949753 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.952047 kubelet[2736]: E0130 14:18:50.949971 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.952047 kubelet[2736]: W0130 14:18:50.949980 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.952047 kubelet[2736]: E0130 14:18:50.949989 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.952047 kubelet[2736]: E0130 14:18:50.950186 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.952047 kubelet[2736]: W0130 14:18:50.950214 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.952047 kubelet[2736]: E0130 14:18:50.950236 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:50.952404 kubelet[2736]: E0130 14:18:50.950396 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.952404 kubelet[2736]: W0130 14:18:50.950406 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.952404 kubelet[2736]: E0130 14:18:50.950415 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.980561 kubelet[2736]: E0130 14:18:50.978794 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.980561 kubelet[2736]: W0130 14:18:50.978836 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.980561 kubelet[2736]: E0130 14:18:50.978858 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.981053 kubelet[2736]: E0130 14:18:50.981014 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.981053 kubelet[2736]: W0130 14:18:50.981042 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.981156 kubelet[2736]: E0130 14:18:50.981066 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.983573 kubelet[2736]: E0130 14:18:50.983533 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.983573 kubelet[2736]: W0130 14:18:50.983565 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.984140 kubelet[2736]: E0130 14:18:50.984102 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:18:50.984973 kubelet[2736]: E0130 14:18:50.984937 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.984973 kubelet[2736]: W0130 14:18:50.984963 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.985323 kubelet[2736]: E0130 14:18:50.985297 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:18:50.986110 kubelet[2736]: E0130 14:18:50.986066 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.986110 kubelet[2736]: W0130 14:18:50.986087 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.986558 kubelet[2736]: E0130 14:18:50.986532 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the identical driver-call.go:262 / driver-call.go:149 / plugins.go:691 triplet repeats twelve more times between 14:18:50.986826 and 14:18:50.992172; the duplicates are elided here, resuming at the final repetition ...]
Jan 30 14:18:50.992470 kubelet[2736]: E0130 14:18:50.992396 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:18:50.992470 kubelet[2736]: W0130 14:18:50.992405 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:18:50.992470 kubelet[2736]: E0130 14:18:50.992415 2736 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 30 14:18:51.748575 kubelet[2736]: E0130 14:18:51.748508 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqhl" podUID="36f3a4c2-f842-4550-b82d-bc5e5af52ab2" Jan 30 14:18:51.775324 containerd[1487]: time="2025-01-30T14:18:51.775187002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:51.776815 containerd[1487]: time="2025-01-30T14:18:51.776753792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 30 14:18:51.777850 containerd[1487]: time="2025-01-30T14:18:51.777795946Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:51.780517 containerd[1487]: time="2025-01-30T14:18:51.780452969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:51.781304 containerd[1487]: time="2025-01-30T14:18:51.781229764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.310318504s" Jan 30 14:18:51.781304 containerd[1487]: time="2025-01-30T14:18:51.781264724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 14:18:51.787955 containerd[1487]: time="2025-01-30T14:18:51.787774963Z" level=info msg="CreateContainer within sandbox \"3033a37ea6cd234f2fb1a924f6c65fd44c542a5c0dbcdb7ee4f5c2934a90575f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:18:51.806534 containerd[1487]: time="2025-01-30T14:18:51.806414127Z" level=info msg="CreateContainer within sandbox \"3033a37ea6cd234f2fb1a924f6c65fd44c542a5c0dbcdb7ee4f5c2934a90575f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7eac29e74a47a8ab794eedca4a8626e6f41570978200c54cddc369d97bf7bbc6\"" Jan 30 14:18:51.809170 containerd[1487]: time="2025-01-30T14:18:51.807111763Z" level=info msg="StartContainer for \"7eac29e74a47a8ab794eedca4a8626e6f41570978200c54cddc369d97bf7bbc6\"" Jan 30 14:18:51.848440 systemd[1]: Started cri-containerd-7eac29e74a47a8ab794eedca4a8626e6f41570978200c54cddc369d97bf7bbc6.scope - libcontainer container 7eac29e74a47a8ab794eedca4a8626e6f41570978200c54cddc369d97bf7bbc6. 
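
The kubelet errors above come from FlexVolume's dynamic plugin probe: kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to unmarshal a JSON status object from its stdout; since the uds binary is not installed yet, the call yields empty output, hence "unexpected end of JSON input". The pod2daemon-flexvol image pulled above ships the flexvol-driver init container whose job is to install that binary. A minimal sketch of the init handshake, assuming only the documented FlexVolume call convention; the DriverStatus shape below is illustrative, not the nodeagent~uds driver's actual code:

// flexvolinit.go - minimal sketch of the "init" call the kubelet probe expects:
// the kubelet execs the driver binary with "init" and parses a JSON status
// object from the binary's stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON shape kubelet's driver-call.go unmarshals.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s DriverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out)) // an empty stdout here is what kubelet reports as "unexpected end of JSON input"
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		reply(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	reply(DriverStatus{Status: "Not supported"})
}

Once a binary answering init like this is in place under nodeagent~uds/, the probe loop stops logging.
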
Jan 30 14:18:51.857663 kubelet[2736]: I0130 14:18:51.857615 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:18:51.882597 containerd[1487]: time="2025-01-30T14:18:51.882550132Z" level=info msg="StartContainer for \"7eac29e74a47a8ab794eedca4a8626e6f41570978200c54cddc369d97bf7bbc6\" returns successfully" Jan 30 14:18:51.908716 systemd[1]: cri-containerd-7eac29e74a47a8ab794eedca4a8626e6f41570978200c54cddc369d97bf7bbc6.scope: Deactivated successfully. Jan 30 14:18:51.935480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eac29e74a47a8ab794eedca4a8626e6f41570978200c54cddc369d97bf7bbc6-rootfs.mount: Deactivated successfully. Jan 30 14:18:52.037313 containerd[1487]: time="2025-01-30T14:18:52.037116251Z" level=info msg="shim disconnected" id=7eac29e74a47a8ab794eedca4a8626e6f41570978200c54cddc369d97bf7bbc6 namespace=k8s.io Jan 30 14:18:52.037313 containerd[1487]: time="2025-01-30T14:18:52.037181691Z" level=warning msg="cleaning up after shim disconnected" id=7eac29e74a47a8ab794eedca4a8626e6f41570978200c54cddc369d97bf7bbc6 namespace=k8s.io Jan 30 14:18:52.037313 containerd[1487]: time="2025-01-30T14:18:52.037192651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:18:52.865626 containerd[1487]: time="2025-01-30T14:18:52.865553108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:18:52.886620 kubelet[2736]: I0130 14:18:52.885541 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-869f5c767d-q9wmd" podStartSLOduration=3.60808822 podStartE2EDuration="5.885523386s" podCreationTimestamp="2025-01-30 14:18:47 +0000 UTC" firstStartedPulling="2025-01-30 14:18:48.192967378 +0000 UTC m=+14.566235684" lastFinishedPulling="2025-01-30 14:18:50.470402424 +0000 UTC m=+16.843670850" observedRunningTime="2025-01-30 14:18:50.873558331 +0000 UTC m=+17.246826677" watchObservedRunningTime="2025-01-30 14:18:52.885523386 +0000 UTC m=+19.258791732" Jan 30 14:18:53.750669 kubelet[2736]: E0130 14:18:53.750122 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqhl" podUID="36f3a4c2-f842-4550-b82d-bc5e5af52ab2" Jan 30 14:18:54.225494 systemd[1]: Started sshd@17-157.90.246.176:22-185.146.232.60:37408.service - OpenSSH per-connection server daemon (185.146.232.60:37408). Jan 30 14:18:54.477994 sshd[3437]: Invalid user tony from 185.146.232.60 port 37408 Jan 30 14:18:54.511274 sshd[3437]: Received disconnect from 185.146.232.60 port 37408:11: Bye Bye [preauth] Jan 30 14:18:54.511274 sshd[3437]: Disconnected from invalid user tony 185.146.232.60 port 37408 [preauth] Jan 30 14:18:54.514188 systemd[1]: sshd@17-157.90.246.176:22-185.146.232.60:37408.service: Deactivated successfully. 
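
The pod_startup_latency_tracker entry above is worth unpacking: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (14:18:52.885523386 − 14:18:47 = 5.885523386s), and podStartSLOduration is that figure minus the image-pull window lastFinishedPulling − firstStartedPulling (≈ 2.277s), giving ≈ 3.608s. A short sketch reproducing the arithmetic from the logged values:

// slostartup.go - reproduces the pod-startup arithmetic from the logged values.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func ts(v string) time.Time {
	t, err := time.Parse(layout, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-01-30 14:18:47 +0000 UTC")
	firstPull := ts("2025-01-30 14:18:48.192967378 +0000 UTC")
	lastPull := ts("2025-01-30 14:18:50.470402424 +0000 UTC")
	observed := ts("2025-01-30 14:18:52.885523386 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration: 5.885523386s
	slo := e2e - lastPull.Sub(firstPull) // SLO duration excludes the image-pull window
	// kubelet computes with the monotonic (m=+) readings, so it logs
	// podStartSLOduration=3.60808822; the wall-clock arithmetic here lands
	// within ~100ns of that.
	fmt.Println(e2e, slo)
}
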
Jan 30 14:18:55.473423 containerd[1487]: time="2025-01-30T14:18:55.473344893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:55.475166 containerd[1487]: time="2025-01-30T14:18:55.475125483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 14:18:55.476657 containerd[1487]: time="2025-01-30T14:18:55.476582954Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:55.478816 containerd[1487]: time="2025-01-30T14:18:55.478748022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:18:55.479520 containerd[1487]: time="2025-01-30T14:18:55.479364418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.61376983s" Jan 30 14:18:55.479520 containerd[1487]: time="2025-01-30T14:18:55.479402978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 14:18:55.483369 containerd[1487]: time="2025-01-30T14:18:55.483137837Z" level=info msg="CreateContainer within sandbox \"3033a37ea6cd234f2fb1a924f6c65fd44c542a5c0dbcdb7ee4f5c2934a90575f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:18:55.507232 containerd[1487]: time="2025-01-30T14:18:55.507153259Z" level=info msg="CreateContainer within sandbox \"3033a37ea6cd234f2fb1a924f6c65fd44c542a5c0dbcdb7ee4f5c2934a90575f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"18109047bb892b55dcc27ba9846e80d88fb3c174430e3b237cf6588e2df531b0\"" Jan 30 14:18:55.509115 containerd[1487]: time="2025-01-30T14:18:55.507942174Z" level=info msg="StartContainer for \"18109047bb892b55dcc27ba9846e80d88fb3c174430e3b237cf6588e2df531b0\"" Jan 30 14:18:55.556545 systemd[1]: Started cri-containerd-18109047bb892b55dcc27ba9846e80d88fb3c174430e3b237cf6588e2df531b0.scope - libcontainer container 18109047bb892b55dcc27ba9846e80d88fb3c174430e3b237cf6588e2df531b0. 
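
Putting the container lifecycle lines together: 3033a37ea6cd... is the calico-node pod's sandbox, and the CreateContainer/StartContainer entries show its init containers running one at a time, each exiting before the next starts; the scope-deactivated and "shim disconnected" lines after each of them are normal exit cleanup, not failures. A plain-struct sketch of the sequence, with names, images, and timestamps copied from the surrounding entries (no client libraries assumed):

// initorder.go - the container sequence implied by this log's CreateContainer
// entries for sandbox 3033a37ea6cd...: two init containers, then the
// long-running calico-node container.
package main

import "fmt"

type container struct{ name, image, logged string }

func main() {
	initContainers := []container{
		{"flexvol-driver", "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1", "started 14:18:51.848, scope deactivated 14:18:51.908"},
		{"install-cni", "ghcr.io/flatcar/calico/cni:v3.29.1", "started 14:18:55.556, scope deactivated 14:18:56.270"},
	}
	mainContainer := container{"calico-node", "ghcr.io/flatcar/calico/node:v3.29.1", "started 14:19:01.314"}

	// Init containers run strictly in order and must exit 0 before the next
	// one starts; the shim-disconnect lines in the log are their exit cleanup.
	for _, c := range initContainers {
		fmt.Printf("init  %-16s %s (%s)\n", c.name, c.image, c.logged)
	}
	fmt.Printf("main  %-16s %s (%s)\n", mainContainer.name, mainContainer.image, mainContainer.logged)
}
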
Jan 30 14:18:55.591969 containerd[1487]: time="2025-01-30T14:18:55.591837253Z" level=info msg="StartContainer for \"18109047bb892b55dcc27ba9846e80d88fb3c174430e3b237cf6588e2df531b0\" returns successfully" Jan 30 14:18:55.749819 kubelet[2736]: E0130 14:18:55.749567 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqhl" podUID="36f3a4c2-f842-4550-b82d-bc5e5af52ab2" Jan 30 14:18:56.263394 containerd[1487]: time="2025-01-30T14:18:56.263341713Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:18:56.270521 systemd[1]: cri-containerd-18109047bb892b55dcc27ba9846e80d88fb3c174430e3b237cf6588e2df531b0.scope: Deactivated successfully. Jan 30 14:18:56.300553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18109047bb892b55dcc27ba9846e80d88fb3c174430e3b237cf6588e2df531b0-rootfs.mount: Deactivated successfully. Jan 30 14:18:56.360913 kubelet[2736]: I0130 14:18:56.359797 2736 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 14:18:56.410835 systemd[1]: Created slice kubepods-burstable-podbd268bd3_a239_4856_881d_2df8012160ba.slice - libcontainer container kubepods-burstable-podbd268bd3_a239_4856_881d_2df8012160ba.slice. Jan 30 14:18:56.422620 systemd[1]: Created slice kubepods-burstable-pode8c173ab_1e44_495a_b07f_a4b7866f3d63.slice - libcontainer container kubepods-burstable-pode8c173ab_1e44_495a_b07f_a4b7866f3d63.slice. Jan 30 14:18:56.428671 containerd[1487]: time="2025-01-30T14:18:56.428601905Z" level=info msg="shim disconnected" id=18109047bb892b55dcc27ba9846e80d88fb3c174430e3b237cf6588e2df531b0 namespace=k8s.io Jan 30 14:18:56.428671 containerd[1487]: time="2025-01-30T14:18:56.428659105Z" level=warning msg="cleaning up after shim disconnected" id=18109047bb892b55dcc27ba9846e80d88fb3c174430e3b237cf6588e2df531b0 namespace=k8s.io Jan 30 14:18:56.428671 containerd[1487]: time="2025-01-30T14:18:56.428669185Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:18:56.451462 kubelet[2736]: W0130 14:18:56.450493 2736 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081-3-0-2-dd601a010b" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-0-2-dd601a010b' and this object Jan 30 14:18:56.451462 kubelet[2736]: E0130 14:18:56.450538 2736 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081-3-0-2-dd601a010b\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-3-0-2-dd601a010b' and this object" logger="UnhandledError" Jan 30 14:18:56.451911 systemd[1]: Created slice kubepods-besteffort-pod6140804b_3226_47d5_b2f9_051be91afcb7.slice - libcontainer container kubepods-besteffort-pod6140804b_3226_47d5_b2f9_051be91afcb7.slice. 
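
The reload error above is containerd's CRI plugin reacting to a write under /etc/cni/net.d: install-cni has dropped calico-kubeconfig first, but until a parseable network config lands in that directory the runtime keeps reporting NetworkReady=false ("cni plugin not initialized"), which is also why csi-node-driver-7cqhl keeps failing to sync. For orientation, a rough sketch of the kind of conflist Calico's installer eventually writes there; every field value below is an illustrative placeholder, not data read from this node:

// conflistshape.go - illustrative only: roughly the shape of the CNI conflist
// Calico's install-cni container eventually writes into /etc/cni/net.d.
package main

import (
	"encoding/json"
	"fmt"
)

const conflist = `{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "ipam": {"type": "calico-ipam"},
      "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"}
    },
    {"type": "portmap", "snat": true, "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	var cfg struct {
		Name    string `json:"name"`
		Plugins []struct {
			Type string `json:"type"`
		} `json:"plugins"`
	}
	// A missing or truncated conflist fails exactly this parse step, which is
	// what the "no network config found in /etc/cni/net.d" reload error reports.
	if err := json.Unmarshal([]byte(conflist), &cfg); err != nil {
		panic(err)
	}
	for _, p := range cfg.Plugins {
		fmt.Println(cfg.Name, "uses plugin:", p.Type)
	}
}
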
Jan 30 14:18:56.455339 kubelet[2736]: W0130 14:18:56.454693 2736 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-2-dd601a010b" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-0-2-dd601a010b' and this object Jan 30 14:18:56.455536 kubelet[2736]: E0130 14:18:56.455495 2736 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-0-2-dd601a010b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-3-0-2-dd601a010b' and this object" logger="UnhandledError" Jan 30 14:18:56.470108 systemd[1]: Created slice kubepods-besteffort-pod54e7f95c_309e_483a_bdae_b233ec02fb58.slice - libcontainer container kubepods-besteffort-pod54e7f95c_309e_483a_bdae_b233ec02fb58.slice. Jan 30 14:18:56.480246 systemd[1]: Created slice kubepods-besteffort-pod0517adcd_3cf1_4360_be3e_43009d876448.slice - libcontainer container kubepods-besteffort-pod0517adcd_3cf1_4360_be3e_43009d876448.slice. Jan 30 14:18:56.525056 kubelet[2736]: I0130 14:18:56.524867 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd268bd3-a239-4856-881d-2df8012160ba-config-volume\") pod \"coredns-6f6b679f8f-zzmzg\" (UID: \"bd268bd3-a239-4856-881d-2df8012160ba\") " pod="kube-system/coredns-6f6b679f8f-zzmzg" Jan 30 14:18:56.525056 kubelet[2736]: I0130 14:18:56.524916 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sg6d\" (UniqueName: \"kubernetes.io/projected/e8c173ab-1e44-495a-b07f-a4b7866f3d63-kube-api-access-5sg6d\") pod \"coredns-6f6b679f8f-4sg8x\" (UID: \"e8c173ab-1e44-495a-b07f-a4b7866f3d63\") " pod="kube-system/coredns-6f6b679f8f-4sg8x" Jan 30 14:18:56.525056 kubelet[2736]: I0130 14:18:56.524955 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mznfg\" (UniqueName: \"kubernetes.io/projected/6140804b-3226-47d5-b2f9-051be91afcb7-kube-api-access-mznfg\") pod \"calico-apiserver-5ddc4464bf-hv2l4\" (UID: \"6140804b-3226-47d5-b2f9-051be91afcb7\") " pod="calico-apiserver/calico-apiserver-5ddc4464bf-hv2l4" Jan 30 14:18:56.525056 kubelet[2736]: I0130 14:18:56.524988 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbqw4\" (UniqueName: \"kubernetes.io/projected/54e7f95c-309e-483a-bdae-b233ec02fb58-kube-api-access-gbqw4\") pod \"calico-kube-controllers-5c9c95dfbf-6n4hn\" (UID: \"54e7f95c-309e-483a-bdae-b233ec02fb58\") " pod="calico-system/calico-kube-controllers-5c9c95dfbf-6n4hn" Jan 30 14:18:56.525056 kubelet[2736]: I0130 14:18:56.525007 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcs4v\" (UniqueName: \"kubernetes.io/projected/0517adcd-3cf1-4360-be3e-43009d876448-kube-api-access-rcs4v\") pod \"calico-apiserver-5ddc4464bf-hldb9\" (UID: \"0517adcd-3cf1-4360-be3e-43009d876448\") " pod="calico-apiserver/calico-apiserver-5ddc4464bf-hldb9" Jan 30 14:18:56.525457 kubelet[2736]: I0130 14:18:56.525024 2736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0517adcd-3cf1-4360-be3e-43009d876448-calico-apiserver-certs\") pod \"calico-apiserver-5ddc4464bf-hldb9\" (UID: \"0517adcd-3cf1-4360-be3e-43009d876448\") " pod="calico-apiserver/calico-apiserver-5ddc4464bf-hldb9" Jan 30 14:18:56.525457 kubelet[2736]: I0130 14:18:56.525056 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54e7f95c-309e-483a-bdae-b233ec02fb58-tigera-ca-bundle\") pod \"calico-kube-controllers-5c9c95dfbf-6n4hn\" (UID: \"54e7f95c-309e-483a-bdae-b233ec02fb58\") " pod="calico-system/calico-kube-controllers-5c9c95dfbf-6n4hn" Jan 30 14:18:56.525457 kubelet[2736]: I0130 14:18:56.525076 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktcfn\" (UniqueName: \"kubernetes.io/projected/bd268bd3-a239-4856-881d-2df8012160ba-kube-api-access-ktcfn\") pod \"coredns-6f6b679f8f-zzmzg\" (UID: \"bd268bd3-a239-4856-881d-2df8012160ba\") " pod="kube-system/coredns-6f6b679f8f-zzmzg" Jan 30 14:18:56.525457 kubelet[2736]: I0130 14:18:56.525129 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8c173ab-1e44-495a-b07f-a4b7866f3d63-config-volume\") pod \"coredns-6f6b679f8f-4sg8x\" (UID: \"e8c173ab-1e44-495a-b07f-a4b7866f3d63\") " pod="kube-system/coredns-6f6b679f8f-4sg8x" Jan 30 14:18:56.525457 kubelet[2736]: I0130 14:18:56.525162 2736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6140804b-3226-47d5-b2f9-051be91afcb7-calico-apiserver-certs\") pod \"calico-apiserver-5ddc4464bf-hv2l4\" (UID: \"6140804b-3226-47d5-b2f9-051be91afcb7\") " pod="calico-apiserver/calico-apiserver-5ddc4464bf-hv2l4" Jan 30 14:18:56.718094 containerd[1487]: time="2025-01-30T14:18:56.717970080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zzmzg,Uid:bd268bd3-a239-4856-881d-2df8012160ba,Namespace:kube-system,Attempt:0,}" Jan 30 14:18:56.732470 containerd[1487]: time="2025-01-30T14:18:56.732021281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4sg8x,Uid:e8c173ab-1e44-495a-b07f-a4b7866f3d63,Namespace:kube-system,Attempt:0,}" Jan 30 14:18:56.776656 containerd[1487]: time="2025-01-30T14:18:56.776292312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c9c95dfbf-6n4hn,Uid:54e7f95c-309e-483a-bdae-b233ec02fb58,Namespace:calico-system,Attempt:0,}" Jan 30 14:18:56.869302 containerd[1487]: time="2025-01-30T14:18:56.869126190Z" level=error msg="Failed to destroy network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.869817 containerd[1487]: time="2025-01-30T14:18:56.869491028Z" level=error msg="encountered an error cleaning up failed sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 30 14:18:56.869817 containerd[1487]: time="2025-01-30T14:18:56.869549348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zzmzg,Uid:bd268bd3-a239-4856-881d-2df8012160ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.869943 kubelet[2736]: E0130 14:18:56.869817 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.869943 kubelet[2736]: E0130 14:18:56.869896 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zzmzg" Jan 30 14:18:56.869943 kubelet[2736]: E0130 14:18:56.869917 2736 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zzmzg" Jan 30 14:18:56.871668 kubelet[2736]: E0130 14:18:56.869959 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-zzmzg_kube-system(bd268bd3-a239-4856-881d-2df8012160ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-zzmzg_kube-system(bd268bd3-a239-4856-881d-2df8012160ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zzmzg" podUID="bd268bd3-a239-4856-881d-2df8012160ba" Jan 30 14:18:56.884001 containerd[1487]: time="2025-01-30T14:18:56.883623389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:18:56.887095 kubelet[2736]: I0130 14:18:56.886509 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:18:56.890314 containerd[1487]: time="2025-01-30T14:18:56.888253803Z" level=info msg="StopPodSandbox for \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\"" Jan 30 14:18:56.890314 containerd[1487]: time="2025-01-30T14:18:56.888437522Z" level=info msg="Ensure that sandbox 9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580 in task-service has 
been cleanup successfully" Jan 30 14:18:56.901280 containerd[1487]: time="2025-01-30T14:18:56.900682293Z" level=error msg="Failed to destroy network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.901280 containerd[1487]: time="2025-01-30T14:18:56.901028571Z" level=error msg="encountered an error cleaning up failed sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.901280 containerd[1487]: time="2025-01-30T14:18:56.901083051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4sg8x,Uid:e8c173ab-1e44-495a-b07f-a4b7866f3d63,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.901474 kubelet[2736]: E0130 14:18:56.901301 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.901474 kubelet[2736]: E0130 14:18:56.901366 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4sg8x" Jan 30 14:18:56.901474 kubelet[2736]: E0130 14:18:56.901388 2736 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4sg8x" Jan 30 14:18:56.901557 kubelet[2736]: E0130 14:18:56.901429 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4sg8x_kube-system(e8c173ab-1e44-495a-b07f-a4b7866f3d63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4sg8x_kube-system(e8c173ab-1e44-495a-b07f-a4b7866f3d63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-6f6b679f8f-4sg8x" podUID="e8c173ab-1e44-495a-b07f-a4b7866f3d63" Jan 30 14:18:56.945287 containerd[1487]: time="2025-01-30T14:18:56.945230283Z" level=error msg="Failed to destroy network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.945701 containerd[1487]: time="2025-01-30T14:18:56.945636601Z" level=error msg="encountered an error cleaning up failed sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.945755 containerd[1487]: time="2025-01-30T14:18:56.945700000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c9c95dfbf-6n4hn,Uid:54e7f95c-309e-483a-bdae-b233ec02fb58,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.947179 kubelet[2736]: E0130 14:18:56.945924 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.947179 kubelet[2736]: E0130 14:18:56.945990 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c9c95dfbf-6n4hn" Jan 30 14:18:56.947179 kubelet[2736]: E0130 14:18:56.946041 2736 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c9c95dfbf-6n4hn" Jan 30 14:18:56.947411 kubelet[2736]: E0130 14:18:56.946100 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c9c95dfbf-6n4hn_calico-system(54e7f95c-309e-483a-bdae-b233ec02fb58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c9c95dfbf-6n4hn_calico-system(54e7f95c-309e-483a-bdae-b233ec02fb58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c9c95dfbf-6n4hn" podUID="54e7f95c-309e-483a-bdae-b233ec02fb58" Jan 30 14:18:56.957667 containerd[1487]: time="2025-01-30T14:18:56.957498374Z" level=error msg="StopPodSandbox for \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\" failed" error="failed to destroy network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:56.958143 kubelet[2736]: E0130 14:18:56.958093 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:18:56.958728 kubelet[2736]: E0130 14:18:56.958157 2736 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580"} Jan 30 14:18:56.958728 kubelet[2736]: E0130 14:18:56.958245 2736 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd268bd3-a239-4856-881d-2df8012160ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:18:56.958728 kubelet[2736]: E0130 14:18:56.958268 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd268bd3-a239-4856-881d-2df8012160ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zzmzg" podUID="bd268bd3-a239-4856-881d-2df8012160ba" Jan 30 14:18:57.660183 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580-shm.mount: Deactivated successfully. 
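
All of the RunPodSandbox and StopPodSandbox failures in this stretch die on the same stat: Calico's CNI plugin resolves its node identity from /var/lib/calico/nodename, a file that appears only once the calico/node container is running (its image pull starts at 14:18:56.883 above and completes at 14:19:01 below). Until then the CNI add and delete operations both fail, so even the sandbox-cleanup retries error out the same way. A sketch of that gate, assuming only what the error's own hint text states about the file's role:

// nodenamegate.go - sketch of the gating check implied by the sandbox errors:
// no CNI operation can proceed until calico-node has written its name file.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		// The exact failure mode in the log: both CNI "add" and "delete" stop
		// here, so StopPodSandbox retries fail just like RunPodSandbox did.
		fmt.Fprintf(os.Stderr, "stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile, err)
		os.Exit(1)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		panic(err)
	}
	fmt.Println("node identity:", strings.TrimSpace(string(b)))
}
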
Jan 30 14:18:57.661351 containerd[1487]: time="2025-01-30T14:18:57.661308696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddc4464bf-hv2l4,Uid:6140804b-3226-47d5-b2f9-051be91afcb7,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:18:57.689994 containerd[1487]: time="2025-01-30T14:18:57.689751420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddc4464bf-hldb9,Uid:0517adcd-3cf1-4360-be3e-43009d876448,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:18:57.758782 systemd[1]: Created slice kubepods-besteffort-pod36f3a4c2_f842_4550_b82d_bc5e5af52ab2.slice - libcontainer container kubepods-besteffort-pod36f3a4c2_f842_4550_b82d_bc5e5af52ab2.slice. Jan 30 14:18:57.763489 containerd[1487]: time="2025-01-30T14:18:57.763431214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cqhl,Uid:36f3a4c2-f842-4550-b82d-bc5e5af52ab2,Namespace:calico-system,Attempt:0,}" Jan 30 14:18:57.764315 containerd[1487]: time="2025-01-30T14:18:57.764169690Z" level=error msg="Failed to destroy network for sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.765056 containerd[1487]: time="2025-01-30T14:18:57.764898766Z" level=error msg="encountered an error cleaning up failed sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.765056 containerd[1487]: time="2025-01-30T14:18:57.764966846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddc4464bf-hv2l4,Uid:6140804b-3226-47d5-b2f9-051be91afcb7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.765299 kubelet[2736]: E0130 14:18:57.765161 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.765299 kubelet[2736]: E0130 14:18:57.765277 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hv2l4" Jan 30 14:18:57.765299 kubelet[2736]: E0130 14:18:57.765298 2736 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hv2l4" Jan 30 14:18:57.765464 kubelet[2736]: E0130 14:18:57.765339 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddc4464bf-hv2l4_calico-apiserver(6140804b-3226-47d5-b2f9-051be91afcb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddc4464bf-hv2l4_calico-apiserver(6140804b-3226-47d5-b2f9-051be91afcb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hv2l4" podUID="6140804b-3226-47d5-b2f9-051be91afcb7" Jan 30 14:18:57.793661 containerd[1487]: time="2025-01-30T14:18:57.793610608Z" level=error msg="Failed to destroy network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.796421 containerd[1487]: time="2025-01-30T14:18:57.796355593Z" level=error msg="encountered an error cleaning up failed sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.796531 containerd[1487]: time="2025-01-30T14:18:57.796458152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddc4464bf-hldb9,Uid:0517adcd-3cf1-4360-be3e-43009d876448,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.796747 kubelet[2736]: E0130 14:18:57.796707 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.796820 kubelet[2736]: E0130 14:18:57.796776 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hldb9" Jan 30 14:18:57.796820 kubelet[2736]: E0130 14:18:57.796801 2736 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hldb9" Jan 30 14:18:57.796957 kubelet[2736]: E0130 14:18:57.796854 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddc4464bf-hldb9_calico-apiserver(0517adcd-3cf1-4360-be3e-43009d876448)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddc4464bf-hldb9_calico-apiserver(0517adcd-3cf1-4360-be3e-43009d876448)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hldb9" podUID="0517adcd-3cf1-4360-be3e-43009d876448" Jan 30 14:18:57.839215 containerd[1487]: time="2025-01-30T14:18:57.839154757Z" level=error msg="Failed to destroy network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.839554 containerd[1487]: time="2025-01-30T14:18:57.839524555Z" level=error msg="encountered an error cleaning up failed sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.839629 containerd[1487]: time="2025-01-30T14:18:57.839602795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cqhl,Uid:36f3a4c2-f842-4550-b82d-bc5e5af52ab2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.840269 kubelet[2736]: E0130 14:18:57.839842 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.840269 kubelet[2736]: E0130 14:18:57.839901 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-7cqhl" Jan 30 14:18:57.840269 kubelet[2736]: E0130 14:18:57.839919 2736 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7cqhl" Jan 30 14:18:57.841877 kubelet[2736]: E0130 14:18:57.839976 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7cqhl_calico-system(36f3a4c2-f842-4550-b82d-bc5e5af52ab2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7cqhl_calico-system(36f3a4c2-f842-4550-b82d-bc5e5af52ab2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7cqhl" podUID="36f3a4c2-f842-4550-b82d-bc5e5af52ab2" Jan 30 14:18:57.892970 kubelet[2736]: I0130 14:18:57.892915 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:18:57.894936 containerd[1487]: time="2025-01-30T14:18:57.894413813Z" level=info msg="StopPodSandbox for \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\"" Jan 30 14:18:57.894936 containerd[1487]: time="2025-01-30T14:18:57.894610012Z" level=info msg="Ensure that sandbox f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6 in task-service has been cleanup successfully" Jan 30 14:18:57.895551 kubelet[2736]: I0130 14:18:57.895519 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:18:57.901396 containerd[1487]: time="2025-01-30T14:18:57.900687379Z" level=info msg="StopPodSandbox for \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\"" Jan 30 14:18:57.901396 containerd[1487]: time="2025-01-30T14:18:57.900888218Z" level=info msg="Ensure that sandbox f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff in task-service has been cleanup successfully" Jan 30 14:18:57.908754 kubelet[2736]: I0130 14:18:57.908693 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:18:57.910012 containerd[1487]: time="2025-01-30T14:18:57.909680609Z" level=info msg="StopPodSandbox for \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\"" Jan 30 14:18:57.910342 containerd[1487]: time="2025-01-30T14:18:57.910240726Z" level=info msg="Ensure that sandbox efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e in task-service has been cleanup successfully" Jan 30 14:18:57.919698 kubelet[2736]: I0130 14:18:57.919417 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:18:57.922274 containerd[1487]: time="2025-01-30T14:18:57.920895228Z" level=info msg="StopPodSandbox for 
\"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\"" Jan 30 14:18:57.922274 containerd[1487]: time="2025-01-30T14:18:57.921107626Z" level=info msg="Ensure that sandbox 04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4 in task-service has been cleanup successfully" Jan 30 14:18:57.926176 kubelet[2736]: I0130 14:18:57.925132 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:18:57.932219 containerd[1487]: time="2025-01-30T14:18:57.932144646Z" level=info msg="StopPodSandbox for \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\"" Jan 30 14:18:57.935274 containerd[1487]: time="2025-01-30T14:18:57.935117149Z" level=info msg="Ensure that sandbox bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245 in task-service has been cleanup successfully" Jan 30 14:18:57.968365 containerd[1487]: time="2025-01-30T14:18:57.968313967Z" level=error msg="StopPodSandbox for \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\" failed" error="failed to destroy network for sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.970073 kubelet[2736]: E0130 14:18:57.969898 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:18:57.970073 kubelet[2736]: E0130 14:18:57.969956 2736 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff"} Jan 30 14:18:57.970073 kubelet[2736]: E0130 14:18:57.969991 2736 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6140804b-3226-47d5-b2f9-051be91afcb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:18:57.970073 kubelet[2736]: E0130 14:18:57.970016 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6140804b-3226-47d5-b2f9-051be91afcb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hv2l4" podUID="6140804b-3226-47d5-b2f9-051be91afcb7" Jan 30 14:18:57.981758 containerd[1487]: time="2025-01-30T14:18:57.981691293Z" level=error msg="StopPodSandbox for 
\"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\" failed" error="failed to destroy network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.983257 kubelet[2736]: E0130 14:18:57.983208 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:18:57.983607 kubelet[2736]: E0130 14:18:57.983452 2736 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6"} Jan 30 14:18:57.983607 kubelet[2736]: E0130 14:18:57.983508 2736 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0517adcd-3cf1-4360-be3e-43009d876448\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:18:57.983607 kubelet[2736]: E0130 14:18:57.983546 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0517adcd-3cf1-4360-be3e-43009d876448\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hldb9" podUID="0517adcd-3cf1-4360-be3e-43009d876448" Jan 30 14:18:57.995475 containerd[1487]: time="2025-01-30T14:18:57.995418017Z" level=error msg="StopPodSandbox for \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\" failed" error="failed to destroy network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:57.995778 kubelet[2736]: E0130 14:18:57.995739 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:18:57.995992 kubelet[2736]: E0130 14:18:57.995899 2736 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e"} Jan 30 14:18:57.995992 kubelet[2736]: E0130 14:18:57.995937 2736 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36f3a4c2-f842-4550-b82d-bc5e5af52ab2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:18:57.995992 kubelet[2736]: E0130 14:18:57.995964 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36f3a4c2-f842-4550-b82d-bc5e5af52ab2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7cqhl" podUID="36f3a4c2-f842-4550-b82d-bc5e5af52ab2" Jan 30 14:18:58.018970 containerd[1487]: time="2025-01-30T14:18:58.018919250Z" level=error msg="StopPodSandbox for \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\" failed" error="failed to destroy network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:58.019376 kubelet[2736]: E0130 14:18:58.019327 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:18:58.019657 kubelet[2736]: E0130 14:18:58.019631 2736 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4"} Jan 30 14:18:58.019788 kubelet[2736]: E0130 14:18:58.019767 2736 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54e7f95c-309e-483a-bdae-b233ec02fb58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:18:58.019912 kubelet[2736]: E0130 14:18:58.019888 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54e7f95c-309e-483a-bdae-b233ec02fb58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c9c95dfbf-6n4hn" podUID="54e7f95c-309e-483a-bdae-b233ec02fb58" Jan 30 14:18:58.023723 containerd[1487]: time="2025-01-30T14:18:58.023675664Z" level=error msg="StopPodSandbox for \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\" failed" error="failed to destroy network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:18:58.024374 kubelet[2736]: E0130 14:18:58.024232 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:18:58.024374 kubelet[2736]: E0130 14:18:58.024314 2736 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245"} Jan 30 14:18:58.024374 kubelet[2736]: E0130 14:18:58.024350 2736 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8c173ab-1e44-495a-b07f-a4b7866f3d63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:18:58.024663 kubelet[2736]: E0130 14:18:58.024608 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8c173ab-1e44-495a-b07f-a4b7866f3d63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4sg8x" podUID="e8c173ab-1e44-495a-b07f-a4b7866f3d63" Jan 30 14:18:58.654205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6-shm.mount: Deactivated successfully. Jan 30 14:18:58.654353 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff-shm.mount: Deactivated successfully. Jan 30 14:19:01.188162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191342229.mount: Deactivated successfully. 
Jan 30 14:19:01.228836 containerd[1487]: time="2025-01-30T14:19:01.228768170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:01.229994 containerd[1487]: time="2025-01-30T14:19:01.229816164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 14:19:01.232306 containerd[1487]: time="2025-01-30T14:19:01.231025998Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:01.234466 containerd[1487]: time="2025-01-30T14:19:01.233705225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:01.234466 containerd[1487]: time="2025-01-30T14:19:01.234308662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.349803878s" Jan 30 14:19:01.234466 containerd[1487]: time="2025-01-30T14:19:01.234351621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 14:19:01.250954 containerd[1487]: time="2025-01-30T14:19:01.250908497Z" level=info msg="CreateContainer within sandbox \"3033a37ea6cd234f2fb1a924f6c65fd44c542a5c0dbcdb7ee4f5c2934a90575f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 14:19:01.269165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525183636.mount: Deactivated successfully. Jan 30 14:19:01.272977 containerd[1487]: time="2025-01-30T14:19:01.272350468Z" level=info msg="CreateContainer within sandbox \"3033a37ea6cd234f2fb1a924f6c65fd44c542a5c0dbcdb7ee4f5c2934a90575f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"34b5d51362309ef6d0d5275c36ef4561c6723f89e6cd882d5ebf9d30b00e053e\"" Jan 30 14:19:01.274183 containerd[1487]: time="2025-01-30T14:19:01.274151539Z" level=info msg="StartContainer for \"34b5d51362309ef6d0d5275c36ef4561c6723f89e6cd882d5ebf9d30b00e053e\"" Jan 30 14:19:01.314960 systemd[1]: Started cri-containerd-34b5d51362309ef6d0d5275c36ef4561c6723f89e6cd882d5ebf9d30b00e053e.scope - libcontainer container 34b5d51362309ef6d0d5275c36ef4561c6723f89e6cd882d5ebf9d30b00e053e. Jan 30 14:19:01.363627 containerd[1487]: time="2025-01-30T14:19:01.363438286Z" level=info msg="StartContainer for \"34b5d51362309ef6d0d5275c36ef4561c6723f89e6cd882d5ebf9d30b00e053e\" returns successfully" Jan 30 14:19:01.482739 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:19:01.482877 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 30 14:19:01.970087 kubelet[2736]: I0130 14:19:01.970000 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6crts" podStartSLOduration=1.997980455 podStartE2EDuration="14.969984244s" podCreationTimestamp="2025-01-30 14:18:47 +0000 UTC" firstStartedPulling="2025-01-30 14:18:48.263633546 +0000 UTC m=+14.636901892" lastFinishedPulling="2025-01-30 14:19:01.235637335 +0000 UTC m=+27.608905681" observedRunningTime="2025-01-30 14:19:01.969426846 +0000 UTC m=+28.342695192" watchObservedRunningTime="2025-01-30 14:19:01.969984244 +0000 UTC m=+28.343252590" Jan 30 14:19:07.568103 kubelet[2736]: I0130 14:19:07.567624 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:19:08.399293 kernel: bpftool[4114]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:19:08.683439 systemd-networkd[1379]: vxlan.calico: Link UP Jan 30 14:19:08.683461 systemd-networkd[1379]: vxlan.calico: Gained carrier Jan 30 14:19:09.750824 containerd[1487]: time="2025-01-30T14:19:09.750732953Z" level=info msg="StopPodSandbox for \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\"" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.836 [INFO][4239] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.837 [INFO][4239] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" iface="eth0" netns="/var/run/netns/cni-982d1cfc-a767-7c65-6bf9-28d6b1bd354a" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.838 [INFO][4239] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" iface="eth0" netns="/var/run/netns/cni-982d1cfc-a767-7c65-6bf9-28d6b1bd354a" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.839 [INFO][4239] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" iface="eth0" netns="/var/run/netns/cni-982d1cfc-a767-7c65-6bf9-28d6b1bd354a" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.839 [INFO][4239] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.839 [INFO][4239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.878 [INFO][4245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" HandleID="k8s-pod-network.f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.878 [INFO][4245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.878 [INFO][4245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.891 [WARNING][4245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" HandleID="k8s-pod-network.f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.891 [INFO][4245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" HandleID="k8s-pod-network.f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.895 [INFO][4245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:09.901547 containerd[1487]: 2025-01-30 14:19:09.898 [INFO][4239] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:09.903969 systemd[1]: run-netns-cni\x2d982d1cfc\x2da767\x2d7c65\x2d6bf9\x2d28d6b1bd354a.mount: Deactivated successfully. Jan 30 14:19:09.904773 containerd[1487]: time="2025-01-30T14:19:09.904581759Z" level=info msg="TearDown network for sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\" successfully" Jan 30 14:19:09.904773 containerd[1487]: time="2025-01-30T14:19:09.904619679Z" level=info msg="StopPodSandbox for \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\" returns successfully" Jan 30 14:19:09.905764 containerd[1487]: time="2025-01-30T14:19:09.905721314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddc4464bf-hv2l4,Uid:6140804b-3226-47d5-b2f9-051be91afcb7,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:19:10.106379 systemd-networkd[1379]: calibf5f1a158c2: Link UP Jan 30 14:19:10.107587 systemd-networkd[1379]: calibf5f1a158c2: Gained carrier Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:09.995 [INFO][4254] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0 calico-apiserver-5ddc4464bf- calico-apiserver 6140804b-3226-47d5-b2f9-051be91afcb7 760 0 2025-01-30 14:18:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddc4464bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-2-dd601a010b calico-apiserver-5ddc4464bf-hv2l4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibf5f1a158c2 [] []}} ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hv2l4" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:09.995 [INFO][4254] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hv2l4" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.030 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" HandleID="k8s-pod-network.ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.050 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" HandleID="k8s-pod-network.ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-2-dd601a010b", "pod":"calico-apiserver-5ddc4464bf-hv2l4", "timestamp":"2025-01-30 14:19:10.03026937 +0000 UTC"}, Hostname:"ci-4081-3-0-2-dd601a010b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.050 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.050 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.050 [INFO][4265] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-dd601a010b' Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.054 [INFO][4265] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.061 [INFO][4265] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.068 [INFO][4265] ipam/ipam.go 489: Trying affinity for 192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.072 [INFO][4265] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.076 [INFO][4265] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.076 [INFO][4265] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.079 [INFO][4265] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0 Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.086 [INFO][4265] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.096 [INFO][4265] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.65/26] block=192.168.40.64/26 
handle="k8s-pod-network.ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.096 [INFO][4265] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.65/26] handle="k8s-pod-network.ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.096 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:10.128887 containerd[1487]: 2025-01-30 14:19:10.096 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.65/26] IPv6=[] ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" HandleID="k8s-pod-network.ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:10.129936 containerd[1487]: 2025-01-30 14:19:10.100 [INFO][4254] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hv2l4" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0", GenerateName:"calico-apiserver-5ddc4464bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6140804b-3226-47d5-b2f9-051be91afcb7", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddc4464bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"", Pod:"calico-apiserver-5ddc4464bf-hv2l4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf5f1a158c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:10.129936 containerd[1487]: 2025-01-30 14:19:10.100 [INFO][4254] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.65/32] ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hv2l4" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:10.129936 containerd[1487]: 2025-01-30 14:19:10.101 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf5f1a158c2 ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hv2l4" 
WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:10.129936 containerd[1487]: 2025-01-30 14:19:10.106 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hv2l4" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:10.129936 containerd[1487]: 2025-01-30 14:19:10.107 [INFO][4254] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hv2l4" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0", GenerateName:"calico-apiserver-5ddc4464bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6140804b-3226-47d5-b2f9-051be91afcb7", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddc4464bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0", Pod:"calico-apiserver-5ddc4464bf-hv2l4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf5f1a158c2", MAC:"86:d6:d5:b1:4e:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:10.129936 containerd[1487]: 2025-01-30 14:19:10.124 [INFO][4254] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hv2l4" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:10.159080 containerd[1487]: time="2025-01-30T14:19:10.158874296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:19:10.159080 containerd[1487]: time="2025-01-30T14:19:10.158994216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:19:10.159080 containerd[1487]: time="2025-01-30T14:19:10.159023776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:10.159406 containerd[1487]: time="2025-01-30T14:19:10.159131255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:10.190800 systemd[1]: Started cri-containerd-ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0.scope - libcontainer container ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0. Jan 30 14:19:10.238358 containerd[1487]: time="2025-01-30T14:19:10.237714597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddc4464bf-hv2l4,Uid:6140804b-3226-47d5-b2f9-051be91afcb7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0\"" Jan 30 14:19:10.243150 containerd[1487]: time="2025-01-30T14:19:10.242405617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:19:10.518577 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Jan 30 14:19:10.753642 containerd[1487]: time="2025-01-30T14:19:10.750655748Z" level=info msg="StopPodSandbox for \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\"" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.815 [INFO][4338] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.816 [INFO][4338] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" iface="eth0" netns="/var/run/netns/cni-3abdee65-e7e4-f1c0-44aa-82977582c81a" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.816 [INFO][4338] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" iface="eth0" netns="/var/run/netns/cni-3abdee65-e7e4-f1c0-44aa-82977582c81a" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.816 [INFO][4338] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" iface="eth0" netns="/var/run/netns/cni-3abdee65-e7e4-f1c0-44aa-82977582c81a" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.816 [INFO][4338] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.816 [INFO][4338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.838 [INFO][4344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" HandleID="k8s-pod-network.04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.838 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.838 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.848 [WARNING][4344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" HandleID="k8s-pod-network.04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.849 [INFO][4344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" HandleID="k8s-pod-network.04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.851 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:10.857242 containerd[1487]: 2025-01-30 14:19:10.853 [INFO][4338] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:10.858161 containerd[1487]: time="2025-01-30T14:19:10.857969485Z" level=info msg="TearDown network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\" successfully" Jan 30 14:19:10.858161 containerd[1487]: time="2025-01-30T14:19:10.858021885Z" level=info msg="StopPodSandbox for \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\" returns successfully" Jan 30 14:19:10.859254 containerd[1487]: time="2025-01-30T14:19:10.858859002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c9c95dfbf-6n4hn,Uid:54e7f95c-309e-483a-bdae-b233ec02fb58,Namespace:calico-system,Attempt:1,}" Jan 30 14:19:10.909330 systemd[1]: run-netns-cni\x2d3abdee65\x2de7e4\x2df1c0\x2d44aa\x2d82977582c81a.mount: Deactivated successfully. 
Jan 30 14:19:11.044852 systemd-networkd[1379]: calic1b5c6d1991: Link UP Jan 30 14:19:11.046089 systemd-networkd[1379]: calic1b5c6d1991: Gained carrier Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:10.923 [INFO][4351] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0 calico-kube-controllers-5c9c95dfbf- calico-system 54e7f95c-309e-483a-bdae-b233ec02fb58 767 0 2025-01-30 14:18:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c9c95dfbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-2-dd601a010b calico-kube-controllers-5c9c95dfbf-6n4hn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic1b5c6d1991 [] []}} ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Namespace="calico-system" Pod="calico-kube-controllers-5c9c95dfbf-6n4hn" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:10.923 [INFO][4351] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Namespace="calico-system" Pod="calico-kube-controllers-5c9c95dfbf-6n4hn" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:10.966 [INFO][4361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" HandleID="k8s-pod-network.6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:10.987 [INFO][4361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" HandleID="k8s-pod-network.6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-2-dd601a010b", "pod":"calico-kube-controllers-5c9c95dfbf-6n4hn", "timestamp":"2025-01-30 14:19:10.96609754 +0000 UTC"}, Hostname:"ci-4081-3-0-2-dd601a010b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:10.987 [INFO][4361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:10.987 [INFO][4361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:10.987 [INFO][4361] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-dd601a010b' Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:10.991 [INFO][4361] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:10.999 [INFO][4361] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.007 [INFO][4361] ipam/ipam.go 489: Trying affinity for 192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.010 [INFO][4361] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.013 [INFO][4361] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.013 [INFO][4361] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.016 [INFO][4361] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3 Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.022 [INFO][4361] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.039 [INFO][4361] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.66/26] block=192.168.40.64/26 handle="k8s-pod-network.6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.039 [INFO][4361] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.66/26] handle="k8s-pod-network.6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.039 [INFO][4361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:19:11.071323 containerd[1487]: 2025-01-30 14:19:11.039 [INFO][4361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.66/26] IPv6=[] ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" HandleID="k8s-pod-network.6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:11.073474 containerd[1487]: 2025-01-30 14:19:11.041 [INFO][4351] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Namespace="calico-system" Pod="calico-kube-controllers-5c9c95dfbf-6n4hn" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0", GenerateName:"calico-kube-controllers-5c9c95dfbf-", Namespace:"calico-system", SelfLink:"", UID:"54e7f95c-309e-483a-bdae-b233ec02fb58", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c9c95dfbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"", Pod:"calico-kube-controllers-5c9c95dfbf-6n4hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1b5c6d1991", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:11.073474 containerd[1487]: 2025-01-30 14:19:11.041 [INFO][4351] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.66/32] ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Namespace="calico-system" Pod="calico-kube-controllers-5c9c95dfbf-6n4hn" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:11.073474 containerd[1487]: 2025-01-30 14:19:11.041 [INFO][4351] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1b5c6d1991 ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Namespace="calico-system" Pod="calico-kube-controllers-5c9c95dfbf-6n4hn" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:11.073474 containerd[1487]: 2025-01-30 14:19:11.045 [INFO][4351] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Namespace="calico-system" Pod="calico-kube-controllers-5c9c95dfbf-6n4hn" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30
14:19:11.073474 containerd[1487]: 2025-01-30 14:19:11.047 [INFO][4351] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Namespace="calico-system" Pod="calico-kube-controllers-5c9c95dfbf-6n4hn" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0", GenerateName:"calico-kube-controllers-5c9c95dfbf-", Namespace:"calico-system", SelfLink:"", UID:"54e7f95c-309e-483a-bdae-b233ec02fb58", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c9c95dfbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3", Pod:"calico-kube-controllers-5c9c95dfbf-6n4hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1b5c6d1991", MAC:"2e:18:c0:32:09:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:11.073474 containerd[1487]: 2025-01-30 14:19:11.068 [INFO][4351] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3" Namespace="calico-system" Pod="calico-kube-controllers-5c9c95dfbf-6n4hn" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:11.096482 containerd[1487]: time="2025-01-30T14:19:11.096349746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:19:11.096773 containerd[1487]: time="2025-01-30T14:19:11.096413345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:19:11.096773 containerd[1487]: time="2025-01-30T14:19:11.096454745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:11.096773 containerd[1487]: time="2025-01-30T14:19:11.096547985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:11.121471 systemd[1]: Started cri-containerd-6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3.scope - libcontainer container 6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3.
Jan 30 14:19:11.160639 systemd-networkd[1379]: calibf5f1a158c2: Gained IPv6LL Jan 30 14:19:11.165990 containerd[1487]: time="2025-01-30T14:19:11.165938691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c9c95dfbf-6n4hn,Uid:54e7f95c-309e-483a-bdae-b233ec02fb58,Namespace:calico-system,Attempt:1,} returns sandbox id \"6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3\"" Jan 30 14:19:11.753263 containerd[1487]: time="2025-01-30T14:19:11.752137849Z" level=info msg="StopPodSandbox for \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\"" Jan 30 14:19:11.754036 containerd[1487]: time="2025-01-30T14:19:11.753687003Z" level=info msg="StopPodSandbox for \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\"" Jan 30 14:19:11.755974 containerd[1487]: time="2025-01-30T14:19:11.755805714Z" level=info msg="StopPodSandbox for \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\"" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.846 [INFO][4453] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.848 [INFO][4453] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" iface="eth0" netns="/var/run/netns/cni-19ccc3e1-0b5a-89a7-55ca-3648557ed3d4" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.848 [INFO][4453] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" iface="eth0" netns="/var/run/netns/cni-19ccc3e1-0b5a-89a7-55ca-3648557ed3d4" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.849 [INFO][4453] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" iface="eth0" netns="/var/run/netns/cni-19ccc3e1-0b5a-89a7-55ca-3648557ed3d4" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.849 [INFO][4453] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.850 [INFO][4453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.928 [INFO][4483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" HandleID="k8s-pod-network.9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.929 [INFO][4483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.929 [INFO][4483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.947 [WARNING][4483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" HandleID="k8s-pod-network.9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.947 [INFO][4483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" HandleID="k8s-pod-network.9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.951 [INFO][4483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:11.955512 containerd[1487]: 2025-01-30 14:19:11.954 [INFO][4453] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:11.957528 containerd[1487]: time="2025-01-30T14:19:11.956086586Z" level=info msg="TearDown network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\" successfully" Jan 30 14:19:11.957528 containerd[1487]: time="2025-01-30T14:19:11.957459220Z" level=info msg="StopPodSandbox for \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\" returns successfully" Jan 30 14:19:11.959121 systemd[1]: run-netns-cni\x2d19ccc3e1\x2d0b5a\x2d89a7\x2d55ca\x2d3648557ed3d4.mount: Deactivated successfully. Jan 30 14:19:11.961972 containerd[1487]: time="2025-01-30T14:19:11.961710002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zzmzg,Uid:bd268bd3-a239-4856-881d-2df8012160ba,Namespace:kube-system,Attempt:1,}" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.870 [INFO][4468] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.870 [INFO][4468] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" iface="eth0" netns="/var/run/netns/cni-f6a00eef-765c-25c8-bc00-5f2179dbf206" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.872 [INFO][4468] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" iface="eth0" netns="/var/run/netns/cni-f6a00eef-765c-25c8-bc00-5f2179dbf206" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.872 [INFO][4468] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" iface="eth0" netns="/var/run/netns/cni-f6a00eef-765c-25c8-bc00-5f2179dbf206" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.872 [INFO][4468] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.872 [INFO][4468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.938 [INFO][4487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" HandleID="k8s-pod-network.f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.939 [INFO][4487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.951 [INFO][4487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.966 [WARNING][4487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" HandleID="k8s-pod-network.f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.966 [INFO][4487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" HandleID="k8s-pod-network.f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.968 [INFO][4487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:11.975853 containerd[1487]: 2025-01-30 14:19:11.972 [INFO][4468] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:11.978368 containerd[1487]: time="2025-01-30T14:19:11.976730218Z" level=info msg="TearDown network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\" successfully" Jan 30 14:19:11.978368 containerd[1487]: time="2025-01-30T14:19:11.976867538Z" level=info msg="StopPodSandbox for \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\" returns successfully" Jan 30 14:19:11.983098 containerd[1487]: time="2025-01-30T14:19:11.982384594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddc4464bf-hldb9,Uid:0517adcd-3cf1-4360-be3e-43009d876448,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:19:11.982859 systemd[1]: run-netns-cni\x2df6a00eef\x2d765c\x2d25c8\x2dbc00\x2d5f2179dbf206.mount: Deactivated successfully. Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.885 [INFO][4469] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.886 [INFO][4469] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" iface="eth0" netns="/var/run/netns/cni-12fcaa3f-7c68-2b3c-71bc-1d3bcc93b886" Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.886 [INFO][4469] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" iface="eth0" netns="/var/run/netns/cni-12fcaa3f-7c68-2b3c-71bc-1d3bcc93b886" Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.887 [INFO][4469] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" iface="eth0" netns="/var/run/netns/cni-12fcaa3f-7c68-2b3c-71bc-1d3bcc93b886" Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.887 [INFO][4469] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.889 [INFO][4469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.949 [INFO][4491] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" HandleID="k8s-pod-network.bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.949 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.968 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.993 [WARNING][4491] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" HandleID="k8s-pod-network.bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:11.993 [INFO][4491] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" HandleID="k8s-pod-network.bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:12.001 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:12.012558 containerd[1487]: 2025-01-30 14:19:12.003 [INFO][4469] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:12.015851 containerd[1487]: time="2025-01-30T14:19:12.015265656Z" level=info msg="TearDown network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\" successfully" Jan 30 14:19:12.015851 containerd[1487]: time="2025-01-30T14:19:12.015395376Z" level=info msg="StopPodSandbox for \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\" returns successfully" Jan 30 14:19:12.016083 systemd[1]: run-netns-cni\x2d12fcaa3f\x2d7c68\x2d2b3c\x2d71bc\x2d1d3bcc93b886.mount: Deactivated successfully. 
Jan 30 14:19:12.019277 containerd[1487]: time="2025-01-30T14:19:12.018859361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4sg8x,Uid:e8c173ab-1e44-495a-b07f-a4b7866f3d63,Namespace:kube-system,Attempt:1,}" Jan 30 14:19:12.257021 systemd-networkd[1379]: cali79f81038a4e: Link UP Jan 30 14:19:12.257514 systemd-networkd[1379]: cali79f81038a4e: Gained carrier Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.081 [INFO][4504] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0 coredns-6f6b679f8f- kube-system bd268bd3-a239-4856-881d-2df8012160ba 777 0 2025-01-30 14:18:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-2-dd601a010b coredns-6f6b679f8f-zzmzg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79f81038a4e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzmzg" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.081 [INFO][4504] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzmzg" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.149 [INFO][4539] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" HandleID="k8s-pod-network.2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.182 [INFO][4539] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" HandleID="k8s-pod-network.2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000302570), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-2-dd601a010b", "pod":"coredns-6f6b679f8f-zzmzg", "timestamp":"2025-01-30 14:19:12.149721376 +0000 UTC"}, Hostname:"ci-4081-3-0-2-dd601a010b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.184 [INFO][4539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.185 [INFO][4539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.185 [INFO][4539] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-dd601a010b' Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.192 [INFO][4539] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.199 [INFO][4539] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.209 [INFO][4539] ipam/ipam.go 489: Trying affinity for 192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.214 [INFO][4539] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.219 [INFO][4539] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.219 [INFO][4539] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.223 [INFO][4539] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17 Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.235 [INFO][4539] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.246 [INFO][4539] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.67/26] block=192.168.40.64/26 handle="k8s-pod-network.2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.246 [INFO][4539] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.67/26] handle="k8s-pod-network.2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.246 [INFO][4539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
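The assignment walk above is Calico's block-affinity scheme: the host looks up its affine blocks, confirms affinity for 192.168.40.64/26, loads the block, and claims the next free ordinal (192.168.40.67 here, with .64 to .66 already taken on this node). A toy sketch of "next free address in a /26" under that assumption; Calico's real allocator uses a per-block bitmap plus a datastore compare-and-swap ("Writing block in order to claim IPs"), which this omits:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFreeInBlock scans a /26 (64 addresses) and returns the first address
// not already allocated. Because Calico routes workloads via /32s, every
// ordinal in the block is usable; this linear scan is the bitmap idea in
// miniature.
func nextFreeInBlock(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.40.64/26") // the host's affine block
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.40.64"): true,
		netip.MustParseAddr("192.168.40.65"): true,
		netip.MustParseAddr("192.168.40.66"): true,
	}
	if ip, ok := nextFreeInBlock(block, used); ok {
		fmt.Println("claimed", ip) // claimed 192.168.40.67, as in the log
	}
}
```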
Jan 30 14:19:12.273876 containerd[1487]: 2025-01-30 14:19:12.246 [INFO][4539] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.67/26] IPv6=[] ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" HandleID="k8s-pod-network.2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:12.275064 containerd[1487]: 2025-01-30 14:19:12.251 [INFO][4504] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzmzg" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bd268bd3-a239-4856-881d-2df8012160ba", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"", Pod:"coredns-6f6b679f8f-zzmzg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f81038a4e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:12.275064 containerd[1487]: 2025-01-30 14:19:12.252 [INFO][4504] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.67/32] ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzmzg" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:12.275064 containerd[1487]: 2025-01-30 14:19:12.252 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79f81038a4e ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzmzg" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:12.275064 containerd[1487]: 2025-01-30 14:19:12.257 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzmzg" 
WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:12.275064 containerd[1487]: 2025-01-30 14:19:12.258 [INFO][4504] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzmzg" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bd268bd3-a239-4856-881d-2df8012160ba", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17", Pod:"coredns-6f6b679f8f-zzmzg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f81038a4e", MAC:"a2:7a:66:76:9c:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:12.275064 containerd[1487]: 2025-01-30 14:19:12.271 [INFO][4504] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzmzg" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:12.308578 containerd[1487]: time="2025-01-30T14:19:12.308432716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:19:12.308889 containerd[1487]: time="2025-01-30T14:19:12.308642475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:19:12.309186 containerd[1487]: time="2025-01-30T14:19:12.309142433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:12.310086 containerd[1487]: time="2025-01-30T14:19:12.309894870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:12.332636 systemd[1]: Started cri-containerd-2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17.scope - libcontainer container 2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17. Jan 30 14:19:12.351245 systemd-networkd[1379]: caliaea34b63e41: Link UP Jan 30 14:19:12.355113 systemd-networkd[1379]: caliaea34b63e41: Gained carrier Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.096 [INFO][4513] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0 calico-apiserver-5ddc4464bf- calico-apiserver 0517adcd-3cf1-4360-be3e-43009d876448 778 0 2025-01-30 14:18:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddc4464bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-2-dd601a010b calico-apiserver-5ddc4464bf-hldb9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaea34b63e41 [] []}} ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hldb9" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.096 [INFO][4513] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hldb9" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.162 [INFO][4544] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" HandleID="k8s-pod-network.c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.197 [INFO][4544] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" HandleID="k8s-pod-network.c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004c5550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-2-dd601a010b", "pod":"calico-apiserver-5ddc4464bf-hldb9", "timestamp":"2025-01-30 14:19:12.162976441 +0000 UTC"}, Hostname:"ci-4081-3-0-2-dd601a010b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.197 [INFO][4544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.246 [INFO][4544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.247 [INFO][4544] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-dd601a010b' Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.292 [INFO][4544] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.299 [INFO][4544] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.309 [INFO][4544] ipam/ipam.go 489: Trying affinity for 192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.312 [INFO][4544] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.316 [INFO][4544] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.316 [INFO][4544] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.319 [INFO][4544] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.328 [INFO][4544] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.341 [INFO][4544] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.68/26] block=192.168.40.64/26 handle="k8s-pod-network.c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.342 [INFO][4544] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.68/26] handle="k8s-pod-network.c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.342 [INFO][4544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:19:12.377219 containerd[1487]: 2025-01-30 14:19:12.344 [INFO][4544] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.68/26] IPv6=[] ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" HandleID="k8s-pod-network.c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:12.378247 containerd[1487]: 2025-01-30 14:19:12.346 [INFO][4513] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hldb9" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0", GenerateName:"calico-apiserver-5ddc4464bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0517adcd-3cf1-4360-be3e-43009d876448", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddc4464bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"", Pod:"calico-apiserver-5ddc4464bf-hldb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaea34b63e41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:12.378247 containerd[1487]: 2025-01-30 14:19:12.347 [INFO][4513] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.68/32] ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hldb9" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:12.378247 containerd[1487]: 2025-01-30 14:19:12.347 [INFO][4513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaea34b63e41 ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hldb9" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:12.378247 containerd[1487]: 2025-01-30 14:19:12.354 [INFO][4513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hldb9" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:12.378247 containerd[1487]: 2025-01-30 14:19:12.357 [INFO][4513] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hldb9" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0", GenerateName:"calico-apiserver-5ddc4464bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0517adcd-3cf1-4360-be3e-43009d876448", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddc4464bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b", Pod:"calico-apiserver-5ddc4464bf-hldb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaea34b63e41", MAC:"e6:a7:ca:71:29:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:12.378247 containerd[1487]: 2025-01-30 14:19:12.374 [INFO][4513] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b" Namespace="calico-apiserver" Pod="calico-apiserver-5ddc4464bf-hldb9" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:12.418142 containerd[1487]: time="2025-01-30T14:19:12.417174423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:19:12.418142 containerd[1487]: time="2025-01-30T14:19:12.417709861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:19:12.418142 containerd[1487]: time="2025-01-30T14:19:12.417729541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:12.419265 containerd[1487]: time="2025-01-30T14:19:12.419190095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zzmzg,Uid:bd268bd3-a239-4856-881d-2df8012160ba,Namespace:kube-system,Attempt:1,} returns sandbox id \"2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17\"" Jan 30 14:19:12.421252 containerd[1487]: time="2025-01-30T14:19:12.420513689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:12.432023 containerd[1487]: time="2025-01-30T14:19:12.431735202Z" level=info msg="CreateContainer within sandbox \"2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:19:12.438441 systemd-networkd[1379]: calic1b5c6d1991: Gained IPv6LL Jan 30 14:19:12.478464 systemd[1]: Started cri-containerd-c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b.scope - libcontainer container c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b. Jan 30 14:19:12.487549 containerd[1487]: time="2025-01-30T14:19:12.487498770Z" level=info msg="CreateContainer within sandbox \"2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5581d3138a43483510747614fecbe979df99ac076fc5f4c544dc4305a818db9\"" Jan 30 14:19:12.490182 containerd[1487]: time="2025-01-30T14:19:12.490017600Z" level=info msg="StartContainer for \"f5581d3138a43483510747614fecbe979df99ac076fc5f4c544dc4305a818db9\"" Jan 30 14:19:12.507702 systemd-networkd[1379]: cali7949b5887a1: Link UP Jan 30 14:19:12.510327 systemd-networkd[1379]: cali7949b5887a1: Gained carrier Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.141 [INFO][4524] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0 coredns-6f6b679f8f- kube-system e8c173ab-1e44-495a-b07f-a4b7866f3d63 779 0 2025-01-30 14:18:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-2-dd601a010b coredns-6f6b679f8f-4sg8x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7949b5887a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Namespace="kube-system" Pod="coredns-6f6b679f8f-4sg8x" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.141 [INFO][4524] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Namespace="kube-system" Pod="coredns-6f6b679f8f-4sg8x" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.198 [INFO][4554] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" HandleID="k8s-pod-network.1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.222 [INFO][4554] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" HandleID="k8s-pod-network.1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317410), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-2-dd601a010b", "pod":"coredns-6f6b679f8f-4sg8x", 
"timestamp":"2025-01-30 14:19:12.198731452 +0000 UTC"}, Hostname:"ci-4081-3-0-2-dd601a010b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.222 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.342 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.342 [INFO][4554] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-dd601a010b' Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.395 [INFO][4554] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.414 [INFO][4554] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.429 [INFO][4554] ipam/ipam.go 489: Trying affinity for 192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.435 [INFO][4554] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.444 [INFO][4554] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.444 [INFO][4554] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.450 [INFO][4554] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.473 [INFO][4554] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.485 [INFO][4554] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.69/26] block=192.168.40.64/26 handle="k8s-pod-network.1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.485 [INFO][4554] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.69/26] handle="k8s-pod-network.1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.486 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:19:12.543172 containerd[1487]: 2025-01-30 14:19:12.486 [INFO][4554] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.69/26] IPv6=[] ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" HandleID="k8s-pod-network.1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.543768 containerd[1487]: 2025-01-30 14:19:12.492 [INFO][4524] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Namespace="kube-system" Pod="coredns-6f6b679f8f-4sg8x" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e8c173ab-1e44-495a-b07f-a4b7866f3d63", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"", Pod:"coredns-6f6b679f8f-4sg8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7949b5887a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:12.543768 containerd[1487]: 2025-01-30 14:19:12.492 [INFO][4524] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.69/32] ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Namespace="kube-system" Pod="coredns-6f6b679f8f-4sg8x" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.543768 containerd[1487]: 2025-01-30 14:19:12.492 [INFO][4524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7949b5887a1 ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Namespace="kube-system" Pod="coredns-6f6b679f8f-4sg8x" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.543768 containerd[1487]: 2025-01-30 14:19:12.508 [INFO][4524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Namespace="kube-system" Pod="coredns-6f6b679f8f-4sg8x" 
WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.543768 containerd[1487]: 2025-01-30 14:19:12.511 [INFO][4524] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Namespace="kube-system" Pod="coredns-6f6b679f8f-4sg8x" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e8c173ab-1e44-495a-b07f-a4b7866f3d63", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd", Pod:"coredns-6f6b679f8f-4sg8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7949b5887a1", MAC:"16:5d:48:bb:7d:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:12.543768 containerd[1487]: 2025-01-30 14:19:12.538 [INFO][4524] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd" Namespace="kube-system" Pod="coredns-6f6b679f8f-4sg8x" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:12.557842 systemd[1]: Started cri-containerd-f5581d3138a43483510747614fecbe979df99ac076fc5f4c544dc4305a818db9.scope - libcontainer container f5581d3138a43483510747614fecbe979df99ac076fc5f4c544dc4305a818db9. Jan 30 14:19:12.565162 containerd[1487]: time="2025-01-30T14:19:12.564187531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddc4464bf-hldb9,Uid:0517adcd-3cf1-4360-be3e-43009d876448,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b\"" Jan 30 14:19:12.586136 containerd[1487]: time="2025-01-30T14:19:12.583939769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:19:12.586136 containerd[1487]: time="2025-01-30T14:19:12.584061048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:19:12.586136 containerd[1487]: time="2025-01-30T14:19:12.584078248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:12.590604 containerd[1487]: time="2025-01-30T14:19:12.589159747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:12.610168 containerd[1487]: time="2025-01-30T14:19:12.610065980Z" level=info msg="StartContainer for \"f5581d3138a43483510747614fecbe979df99ac076fc5f4c544dc4305a818db9\" returns successfully" Jan 30 14:19:12.635438 systemd[1]: Started cri-containerd-1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd.scope - libcontainer container 1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd. Jan 30 14:19:12.676252 containerd[1487]: time="2025-01-30T14:19:12.676209225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4sg8x,Uid:e8c173ab-1e44-495a-b07f-a4b7866f3d63,Namespace:kube-system,Attempt:1,} returns sandbox id \"1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd\"" Jan 30 14:19:12.683159 containerd[1487]: time="2025-01-30T14:19:12.683108196Z" level=info msg="CreateContainer within sandbox \"1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:19:12.700272 containerd[1487]: time="2025-01-30T14:19:12.698394012Z" level=info msg="CreateContainer within sandbox \"1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"288d21542c7b04c3a01813d70dfdc05258fffce06bca479842f67441161e7ba9\"" Jan 30 14:19:12.700872 containerd[1487]: time="2025-01-30T14:19:12.700831642Z" level=info msg="StartContainer for \"288d21542c7b04c3a01813d70dfdc05258fffce06bca479842f67441161e7ba9\"" Jan 30 14:19:12.738161 systemd[1]: Started cri-containerd-288d21542c7b04c3a01813d70dfdc05258fffce06bca479842f67441161e7ba9.scope - libcontainer container 288d21542c7b04c3a01813d70dfdc05258fffce06bca479842f67441161e7ba9. Jan 30 14:19:12.750842 containerd[1487]: time="2025-01-30T14:19:12.750428116Z" level=info msg="StopPodSandbox for \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\"" Jan 30 14:19:12.779707 containerd[1487]: time="2025-01-30T14:19:12.779450915Z" level=info msg="StartContainer for \"288d21542c7b04c3a01813d70dfdc05258fffce06bca479842f67441161e7ba9\" returns successfully" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.848 [INFO][4805] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.849 [INFO][4805] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" iface="eth0" netns="/var/run/netns/cni-20998e91-a999-c550-a163-6208a72d3fb9" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.852 [INFO][4805] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" iface="eth0" netns="/var/run/netns/cni-20998e91-a999-c550-a163-6208a72d3fb9" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.853 [INFO][4805] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" iface="eth0" netns="/var/run/netns/cni-20998e91-a999-c550-a163-6208a72d3fb9" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.854 [INFO][4805] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.855 [INFO][4805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.881 [INFO][4817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" HandleID="k8s-pod-network.efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.881 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.881 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.891 [WARNING][4817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" HandleID="k8s-pod-network.efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.891 [INFO][4817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" HandleID="k8s-pod-network.efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.894 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:12.901312 containerd[1487]: 2025-01-30 14:19:12.898 [INFO][4805] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:12.901312 containerd[1487]: time="2025-01-30T14:19:12.901270728Z" level=info msg="TearDown network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\" successfully" Jan 30 14:19:12.901312 containerd[1487]: time="2025-01-30T14:19:12.901312248Z" level=info msg="StopPodSandbox for \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\" returns successfully" Jan 30 14:19:12.902501 containerd[1487]: time="2025-01-30T14:19:12.902342203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cqhl,Uid:36f3a4c2-f842-4550-b82d-bc5e5af52ab2,Namespace:calico-system,Attempt:1,}" Jan 30 14:19:12.981022 systemd[1]: run-netns-cni\x2d20998e91\x2da999\x2dc550\x2da163\x2d6208a72d3fb9.mount: Deactivated successfully. 
Jan 30 14:19:13.064277 kubelet[2736]: I0130 14:19:13.063879 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zzmzg" podStartSLOduration=33.063860055 podStartE2EDuration="33.063860055s" podCreationTimestamp="2025-01-30 14:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:19:13.030038954 +0000 UTC m=+39.403307340" watchObservedRunningTime="2025-01-30 14:19:13.063860055 +0000 UTC m=+39.437128401" Jan 30 14:19:13.064277 kubelet[2736]: I0130 14:19:13.064010 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4sg8x" podStartSLOduration=33.064003695 podStartE2EDuration="33.064003695s" podCreationTimestamp="2025-01-30 14:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:19:13.063669496 +0000 UTC m=+39.436937842" watchObservedRunningTime="2025-01-30 14:19:13.064003695 +0000 UTC m=+39.437272041" Jan 30 14:19:13.241545 systemd-networkd[1379]: calif20ab475efc: Link UP Jan 30 14:19:13.242723 systemd-networkd[1379]: calif20ab475efc: Gained carrier Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:12.984 [INFO][4824] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0 csi-node-driver- calico-system 36f3a4c2-f842-4550-b82d-bc5e5af52ab2 800 0 2025-01-30 14:18:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-2-dd601a010b csi-node-driver-7cqhl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif20ab475efc [] []}} ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Namespace="calico-system" Pod="csi-node-driver-7cqhl" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:12.984 [INFO][4824] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Namespace="calico-system" Pod="csi-node-driver-7cqhl" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.065 [INFO][4834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" HandleID="k8s-pod-network.132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.186 [INFO][4834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" HandleID="k8s-pod-network.132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024f000), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081-3-0-2-dd601a010b", "pod":"csi-node-driver-7cqhl", "timestamp":"2025-01-30 14:19:13.064646532 +0000 UTC"}, Hostname:"ci-4081-3-0-2-dd601a010b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.186 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.186 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.186 [INFO][4834] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-dd601a010b' Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.191 [INFO][4834] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.200 [INFO][4834] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.207 [INFO][4834] ipam/ipam.go 489: Trying affinity for 192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.210 [INFO][4834] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.214 [INFO][4834] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.214 [INFO][4834] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.217 [INFO][4834] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805 Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.224 [INFO][4834] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.235 [INFO][4834] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.70/26] block=192.168.40.64/26 handle="k8s-pod-network.132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.235 [INFO][4834] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.70/26] handle="k8s-pod-network.132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" host="ci-4081-3-0-2-dd601a010b" Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.235 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:19:13.271554 containerd[1487]: 2025-01-30 14:19:13.235 [INFO][4834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.70/26] IPv6=[] ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" HandleID="k8s-pod-network.132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:13.272888 containerd[1487]: 2025-01-30 14:19:13.237 [INFO][4824] cni-plugin/k8s.go 386: Populated endpoint ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Namespace="calico-system" Pod="csi-node-driver-7cqhl" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36f3a4c2-f842-4550-b82d-bc5e5af52ab2", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"", Pod:"csi-node-driver-7cqhl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif20ab475efc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:13.272888 containerd[1487]: 2025-01-30 14:19:13.237 [INFO][4824] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.70/32] ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Namespace="calico-system" Pod="csi-node-driver-7cqhl" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:13.272888 containerd[1487]: 2025-01-30 14:19:13.237 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif20ab475efc ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Namespace="calico-system" Pod="csi-node-driver-7cqhl" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:13.272888 containerd[1487]: 2025-01-30 14:19:13.244 [INFO][4824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Namespace="calico-system" Pod="csi-node-driver-7cqhl" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:13.272888 containerd[1487]: 2025-01-30 14:19:13.246 [INFO][4824] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Namespace="calico-system" Pod="csi-node-driver-7cqhl" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36f3a4c2-f842-4550-b82d-bc5e5af52ab2", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805", Pod:"csi-node-driver-7cqhl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif20ab475efc", MAC:"9a:2d:f8:52:91:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:13.272888 containerd[1487]: 2025-01-30 14:19:13.266 [INFO][4824] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805" Namespace="calico-system" Pod="csi-node-driver-7cqhl" WorkloadEndpoint="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:13.297888 containerd[1487]: time="2025-01-30T14:19:13.297541978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:19:13.297888 containerd[1487]: time="2025-01-30T14:19:13.297615098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:19:13.297888 containerd[1487]: time="2025-01-30T14:19:13.297630898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:13.297888 containerd[1487]: time="2025-01-30T14:19:13.297732178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:13.329560 systemd[1]: Started cri-containerd-132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805.scope - libcontainer container 132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805. 
Jan 30 14:19:13.363682 containerd[1487]: time="2025-01-30T14:19:13.363639308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cqhl,Uid:36f3a4c2-f842-4550-b82d-bc5e5af52ab2,Namespace:calico-system,Attempt:1,} returns sandbox id \"132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805\"" Jan 30 14:19:14.102612 systemd-networkd[1379]: cali79f81038a4e: Gained IPv6LL Jan 30 14:19:14.295573 systemd-networkd[1379]: caliaea34b63e41: Gained IPv6LL Jan 30 14:19:14.361525 systemd-networkd[1379]: calif20ab475efc: Gained IPv6LL Jan 30 14:19:14.422954 systemd-networkd[1379]: cali7949b5887a1: Gained IPv6LL Jan 30 14:19:14.750304 containerd[1487]: time="2025-01-30T14:19:14.749374644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:14.751316 containerd[1487]: time="2025-01-30T14:19:14.751247837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 14:19:14.751912 containerd[1487]: time="2025-01-30T14:19:14.751650595Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:14.754908 containerd[1487]: time="2025-01-30T14:19:14.754867742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:14.755573 containerd[1487]: time="2025-01-30T14:19:14.755527499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.513064683s" Jan 30 14:19:14.755573 containerd[1487]: time="2025-01-30T14:19:14.755568339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:19:14.766663 containerd[1487]: time="2025-01-30T14:19:14.765827058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:19:14.778824 containerd[1487]: time="2025-01-30T14:19:14.778779006Z" level=info msg="CreateContainer within sandbox \"ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:19:14.800523 containerd[1487]: time="2025-01-30T14:19:14.800437278Z" level=info msg="CreateContainer within sandbox \"ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"babfd9009b47c2a4c04f0123ac590fb7c07412238d809f5f4f2c9072d30d057b\"" Jan 30 14:19:14.806427 containerd[1487]: time="2025-01-30T14:19:14.806368255Z" level=info msg="StartContainer for \"babfd9009b47c2a4c04f0123ac590fb7c07412238d809f5f4f2c9072d30d057b\"" Jan 30 14:19:14.843655 systemd[1]: Started cri-containerd-babfd9009b47c2a4c04f0123ac590fb7c07412238d809f5f4f2c9072d30d057b.scope - libcontainer container babfd9009b47c2a4c04f0123ac590fb7c07412238d809f5f4f2c9072d30d057b. 
Jan 30 14:19:14.900580 containerd[1487]: time="2025-01-30T14:19:14.900132397Z" level=info msg="StartContainer for \"babfd9009b47c2a4c04f0123ac590fb7c07412238d809f5f4f2c9072d30d057b\" returns successfully" Jan 30 14:19:15.066698 kubelet[2736]: I0130 14:19:15.066618 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hv2l4" podStartSLOduration=23.542436853 podStartE2EDuration="28.066601091s" podCreationTimestamp="2025-01-30 14:18:47 +0000 UTC" firstStartedPulling="2025-01-30 14:19:10.241172862 +0000 UTC m=+36.614441208" lastFinishedPulling="2025-01-30 14:19:14.76533706 +0000 UTC m=+41.138605446" observedRunningTime="2025-01-30 14:19:15.064843258 +0000 UTC m=+41.438111684" watchObservedRunningTime="2025-01-30 14:19:15.066601091 +0000 UTC m=+41.439869437" Jan 30 14:19:16.035799 kubelet[2736]: I0130 14:19:16.035334 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:19:17.173334 containerd[1487]: time="2025-01-30T14:19:17.173262065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:17.175756 containerd[1487]: time="2025-01-30T14:19:17.175706935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 14:19:17.178148 containerd[1487]: time="2025-01-30T14:19:17.177025410Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:17.180965 containerd[1487]: time="2025-01-30T14:19:17.180572757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:17.181637 containerd[1487]: time="2025-01-30T14:19:17.181596113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.414923979s" Jan 30 14:19:17.181772 containerd[1487]: time="2025-01-30T14:19:17.181753032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 14:19:17.184470 containerd[1487]: time="2025-01-30T14:19:17.184418222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:19:17.213132 containerd[1487]: time="2025-01-30T14:19:17.212950272Z" level=info msg="CreateContainer within sandbox \"6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:19:17.233221 containerd[1487]: time="2025-01-30T14:19:17.233135155Z" level=info msg="CreateContainer within sandbox \"6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"04cdc4e027b2827149c79347cf7facdbf219c2fe61245663f946869c33ef22bb\"" Jan 30 14:19:17.235228 containerd[1487]: time="2025-01-30T14:19:17.235163587Z" level=info 
msg="StartContainer for \"04cdc4e027b2827149c79347cf7facdbf219c2fe61245663f946869c33ef22bb\"" Jan 30 14:19:17.289651 systemd[1]: Started cri-containerd-04cdc4e027b2827149c79347cf7facdbf219c2fe61245663f946869c33ef22bb.scope - libcontainer container 04cdc4e027b2827149c79347cf7facdbf219c2fe61245663f946869c33ef22bb. Jan 30 14:19:17.332622 containerd[1487]: time="2025-01-30T14:19:17.332527533Z" level=info msg="StartContainer for \"04cdc4e027b2827149c79347cf7facdbf219c2fe61245663f946869c33ef22bb\" returns successfully" Jan 30 14:19:17.567511 containerd[1487]: time="2025-01-30T14:19:17.567444391Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:17.570349 containerd[1487]: time="2025-01-30T14:19:17.570293940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 14:19:17.572526 containerd[1487]: time="2025-01-30T14:19:17.572271052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 387.80103ms" Jan 30 14:19:17.572526 containerd[1487]: time="2025-01-30T14:19:17.572335732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:19:17.574169 containerd[1487]: time="2025-01-30T14:19:17.574144325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:19:17.574934 containerd[1487]: time="2025-01-30T14:19:17.574843803Z" level=info msg="CreateContainer within sandbox \"c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:19:17.595677 containerd[1487]: time="2025-01-30T14:19:17.595421004Z" level=info msg="CreateContainer within sandbox \"c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"563519f1939d9c4c2d01a3a149a562a84a1411a4c8f32228d4e39391d90e439c\"" Jan 30 14:19:17.597012 containerd[1487]: time="2025-01-30T14:19:17.596917518Z" level=info msg="StartContainer for \"563519f1939d9c4c2d01a3a149a562a84a1411a4c8f32228d4e39391d90e439c\"" Jan 30 14:19:17.637435 systemd[1]: Started cri-containerd-563519f1939d9c4c2d01a3a149a562a84a1411a4c8f32228d4e39391d90e439c.scope - libcontainer container 563519f1939d9c4c2d01a3a149a562a84a1411a4c8f32228d4e39391d90e439c. 
Jan 30 14:19:17.700913 containerd[1487]: time="2025-01-30T14:19:17.700864039Z" level=info msg="StartContainer for \"563519f1939d9c4c2d01a3a149a562a84a1411a4c8f32228d4e39391d90e439c\" returns successfully" Jan 30 14:19:18.066027 kubelet[2736]: I0130 14:19:18.064688 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c9c95dfbf-6n4hn" podStartSLOduration=24.049521341 podStartE2EDuration="30.064670445s" podCreationTimestamp="2025-01-30 14:18:48 +0000 UTC" firstStartedPulling="2025-01-30 14:19:11.167832283 +0000 UTC m=+37.541100629" lastFinishedPulling="2025-01-30 14:19:17.182981387 +0000 UTC m=+43.556249733" observedRunningTime="2025-01-30 14:19:18.063927128 +0000 UTC m=+44.437195514" watchObservedRunningTime="2025-01-30 14:19:18.064670445 +0000 UTC m=+44.437938791" Jan 30 14:19:18.143219 kubelet[2736]: I0130 14:19:18.143104 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddc4464bf-hldb9" podStartSLOduration=26.13684562 podStartE2EDuration="31.143085389s" podCreationTimestamp="2025-01-30 14:18:47 +0000 UTC" firstStartedPulling="2025-01-30 14:19:12.567174639 +0000 UTC m=+38.940442945" lastFinishedPulling="2025-01-30 14:19:17.573414368 +0000 UTC m=+43.946682714" observedRunningTime="2025-01-30 14:19:18.085122568 +0000 UTC m=+44.458391034" watchObservedRunningTime="2025-01-30 14:19:18.143085389 +0000 UTC m=+44.516353735" Jan 30 14:19:18.966005 containerd[1487]: time="2025-01-30T14:19:18.965951437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:18.966941 containerd[1487]: time="2025-01-30T14:19:18.966826833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 14:19:18.968104 containerd[1487]: time="2025-01-30T14:19:18.967609950Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:18.970953 containerd[1487]: time="2025-01-30T14:19:18.970912778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:18.971612 containerd[1487]: time="2025-01-30T14:19:18.971571615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.39727321s" Jan 30 14:19:18.971612 containerd[1487]: time="2025-01-30T14:19:18.971608295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 14:19:18.976161 containerd[1487]: time="2025-01-30T14:19:18.975318601Z" level=info msg="CreateContainer within sandbox \"132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:19:18.992987 containerd[1487]: time="2025-01-30T14:19:18.992938495Z" level=info msg="CreateContainer within sandbox \"132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7fa5e4750b65d7d7a414b179ed39289665f803455cb7d59f7f2e9347054d86fe\"" Jan 30 14:19:18.996462 containerd[1487]: time="2025-01-30T14:19:18.996296762Z" level=info msg="StartContainer for \"7fa5e4750b65d7d7a414b179ed39289665f803455cb7d59f7f2e9347054d86fe\"" Jan 30 14:19:19.054427 systemd[1]: Started cri-containerd-7fa5e4750b65d7d7a414b179ed39289665f803455cb7d59f7f2e9347054d86fe.scope - libcontainer container 7fa5e4750b65d7d7a414b179ed39289665f803455cb7d59f7f2e9347054d86fe. Jan 30 14:19:19.059238 kubelet[2736]: I0130 14:19:19.059176 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:19:19.095932 containerd[1487]: time="2025-01-30T14:19:19.095866351Z" level=info msg="StartContainer for \"7fa5e4750b65d7d7a414b179ed39289665f803455cb7d59f7f2e9347054d86fe\" returns successfully" Jan 30 14:19:19.097799 containerd[1487]: time="2025-01-30T14:19:19.097675344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:19:21.479071 containerd[1487]: time="2025-01-30T14:19:21.478044183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:21.479813 containerd[1487]: time="2025-01-30T14:19:21.479777777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 14:19:21.481346 containerd[1487]: time="2025-01-30T14:19:21.481272892Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:21.486055 containerd[1487]: time="2025-01-30T14:19:21.485991555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:19:21.487113 containerd[1487]: time="2025-01-30T14:19:21.486950751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 2.389228167s" Jan 30 14:19:21.487113 containerd[1487]: time="2025-01-30T14:19:21.486991991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 14:19:21.492809 containerd[1487]: time="2025-01-30T14:19:21.492729850Z" level=info msg="CreateContainer within sandbox \"132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:19:21.517963 containerd[1487]: time="2025-01-30T14:19:21.517595840Z" level=info msg="CreateContainer within sandbox \"132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2f9c3af52ff7b5daf512886a37c9426768d0f35c386615e6d6c6b3ed9e81a213\"" Jan 30 14:19:21.519397 containerd[1487]: time="2025-01-30T14:19:21.518901156Z" level=info msg="StartContainer for 
\"2f9c3af52ff7b5daf512886a37c9426768d0f35c386615e6d6c6b3ed9e81a213\"" Jan 30 14:19:21.570588 systemd[1]: Started cri-containerd-2f9c3af52ff7b5daf512886a37c9426768d0f35c386615e6d6c6b3ed9e81a213.scope - libcontainer container 2f9c3af52ff7b5daf512886a37c9426768d0f35c386615e6d6c6b3ed9e81a213. Jan 30 14:19:21.603250 containerd[1487]: time="2025-01-30T14:19:21.602833852Z" level=info msg="StartContainer for \"2f9c3af52ff7b5daf512886a37c9426768d0f35c386615e6d6c6b3ed9e81a213\" returns successfully" Jan 30 14:19:21.870466 kubelet[2736]: I0130 14:19:21.870414 2736 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:19:21.875989 kubelet[2736]: I0130 14:19:21.875908 2736 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:19:28.344703 kubelet[2736]: I0130 14:19:28.344572 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7cqhl" podStartSLOduration=32.221209445 podStartE2EDuration="40.344551849s" podCreationTimestamp="2025-01-30 14:18:48 +0000 UTC" firstStartedPulling="2025-01-30 14:19:13.36558146 +0000 UTC m=+39.738849766" lastFinishedPulling="2025-01-30 14:19:21.488923744 +0000 UTC m=+47.862192170" observedRunningTime="2025-01-30 14:19:22.105579319 +0000 UTC m=+48.478847745" watchObservedRunningTime="2025-01-30 14:19:28.344551849 +0000 UTC m=+54.717820315" Jan 30 14:19:33.794546 containerd[1487]: time="2025-01-30T14:19:33.794481005Z" level=info msg="StopPodSandbox for \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\"" Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.864 [WARNING][5198] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bd268bd3-a239-4856-881d-2df8012160ba", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17", Pod:"coredns-6f6b679f8f-zzmzg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f81038a4e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.865 [INFO][5198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.865 [INFO][5198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" iface="eth0" netns="" Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.865 [INFO][5198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.865 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.906 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" HandleID="k8s-pod-network.9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.907 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.907 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.927 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" HandleID="k8s-pod-network.9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.927 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" HandleID="k8s-pod-network.9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.931 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:33.936644 containerd[1487]: 2025-01-30 14:19:33.934 [INFO][5198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:33.937436 containerd[1487]: time="2025-01-30T14:19:33.937367242Z" level=info msg="TearDown network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\" successfully" Jan 30 14:19:33.937546 containerd[1487]: time="2025-01-30T14:19:33.937528162Z" level=info msg="StopPodSandbox for \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\" returns successfully" Jan 30 14:19:33.938845 containerd[1487]: time="2025-01-30T14:19:33.938816398Z" level=info msg="RemovePodSandbox for \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\"" Jan 30 14:19:33.953169 containerd[1487]: time="2025-01-30T14:19:33.953113194Z" level=info msg="Forcibly stopping sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\"" Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.016 [WARNING][5223] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bd268bd3-a239-4856-881d-2df8012160ba", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"2f210e696a109fea5df12ae1f7b87bdb25ebee7206257a40af745a55d3306b17", Pod:"coredns-6f6b679f8f-zzmzg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f81038a4e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.016 [INFO][5223] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.016 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" iface="eth0" netns="" Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.016 [INFO][5223] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.016 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.067 [INFO][5231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" HandleID="k8s-pod-network.9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.067 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.067 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.083 [WARNING][5231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" HandleID="k8s-pod-network.9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.083 [INFO][5231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" HandleID="k8s-pod-network.9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--zzmzg-eth0" Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.087 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:34.090942 containerd[1487]: 2025-01-30 14:19:34.088 [INFO][5223] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580" Jan 30 14:19:34.090942 containerd[1487]: time="2025-01-30T14:19:34.090828971Z" level=info msg="TearDown network for sandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\" successfully" Jan 30 14:19:34.096482 containerd[1487]: time="2025-01-30T14:19:34.096329234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:19:34.096482 containerd[1487]: time="2025-01-30T14:19:34.096430274Z" level=info msg="RemovePodSandbox \"9ad94df734eb864ee5e083d14cc2509f15b83694cf1d6f854cc0bb77ff454580\" returns successfully" Jan 30 14:19:34.097435 containerd[1487]: time="2025-01-30T14:19:34.097405711Z" level=info msg="StopPodSandbox for \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\"" Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.173 [WARNING][5249] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36f3a4c2-f842-4550-b82d-bc5e5af52ab2", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805", Pod:"csi-node-driver-7cqhl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif20ab475efc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.174 [INFO][5249] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.174 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" iface="eth0" netns="" Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.174 [INFO][5249] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.174 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.198 [INFO][5255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" HandleID="k8s-pod-network.efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.198 [INFO][5255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.198 [INFO][5255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.211 [WARNING][5255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" HandleID="k8s-pod-network.efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.211 [INFO][5255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" HandleID="k8s-pod-network.efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.213 [INFO][5255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:34.217309 containerd[1487]: 2025-01-30 14:19:34.214 [INFO][5249] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:34.217309 containerd[1487]: time="2025-01-30T14:19:34.217116825Z" level=info msg="TearDown network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\" successfully" Jan 30 14:19:34.217309 containerd[1487]: time="2025-01-30T14:19:34.217141864Z" level=info msg="StopPodSandbox for \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\" returns successfully" Jan 30 14:19:34.219179 containerd[1487]: time="2025-01-30T14:19:34.218817019Z" level=info msg="RemovePodSandbox for \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\"" Jan 30 14:19:34.219179 containerd[1487]: time="2025-01-30T14:19:34.218857299Z" level=info msg="Forcibly stopping sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\"" Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.277 [WARNING][5273] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36f3a4c2-f842-4550-b82d-bc5e5af52ab2", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"132e59fc96e930654326b0818c40f8929dbe6ed93dc65950d46df9929496b805", Pod:"csi-node-driver-7cqhl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif20ab475efc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.277 [INFO][5273] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.277 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" iface="eth0" netns="" Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.277 [INFO][5273] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.277 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.303 [INFO][5279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" HandleID="k8s-pod-network.efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.303 [INFO][5279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.303 [INFO][5279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.318 [WARNING][5279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" HandleID="k8s-pod-network.efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.319 [INFO][5279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" HandleID="k8s-pod-network.efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Workload="ci--4081--3--0--2--dd601a010b-k8s-csi--node--driver--7cqhl-eth0" Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.323 [INFO][5279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:34.326773 containerd[1487]: 2025-01-30 14:19:34.325 [INFO][5273] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e" Jan 30 14:19:34.327347 containerd[1487]: time="2025-01-30T14:19:34.326814449Z" level=info msg="TearDown network for sandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\" successfully" Jan 30 14:19:34.336859 containerd[1487]: time="2025-01-30T14:19:34.336796019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:19:34.336859 containerd[1487]: time="2025-01-30T14:19:34.336872738Z" level=info msg="RemovePodSandbox \"efce3097120428f719a18d44493792b3c3a3894134c29586cd0b69148018656e\" returns successfully" Jan 30 14:19:34.337802 containerd[1487]: time="2025-01-30T14:19:34.337760416Z" level=info msg="StopPodSandbox for \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\"" Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.407 [WARNING][5299] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0", GenerateName:"calico-apiserver-5ddc4464bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6140804b-3226-47d5-b2f9-051be91afcb7", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddc4464bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0", Pod:"calico-apiserver-5ddc4464bf-hv2l4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf5f1a158c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.407 [INFO][5299] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.407 [INFO][5299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" iface="eth0" netns="" Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.407 [INFO][5299] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.407 [INFO][5299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.433 [INFO][5306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" HandleID="k8s-pod-network.f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.433 [INFO][5306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.433 [INFO][5306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.445 [WARNING][5306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" HandleID="k8s-pod-network.f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.445 [INFO][5306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" HandleID="k8s-pod-network.f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.449 [INFO][5306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:34.455644 containerd[1487]: 2025-01-30 14:19:34.452 [INFO][5299] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:34.455644 containerd[1487]: time="2025-01-30T14:19:34.455549655Z" level=info msg="TearDown network for sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\" successfully" Jan 30 14:19:34.455644 containerd[1487]: time="2025-01-30T14:19:34.455585055Z" level=info msg="StopPodSandbox for \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\" returns successfully" Jan 30 14:19:34.458408 containerd[1487]: time="2025-01-30T14:19:34.456373573Z" level=info msg="RemovePodSandbox for \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\"" Jan 30 14:19:34.458408 containerd[1487]: time="2025-01-30T14:19:34.456429933Z" level=info msg="Forcibly stopping sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\"" Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.505 [WARNING][5324] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0", GenerateName:"calico-apiserver-5ddc4464bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6140804b-3226-47d5-b2f9-051be91afcb7", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddc4464bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"ed4ae374111cc1c49843d960b26f715c7ad6da6908e6310530bc3340c70b3ac0", Pod:"calico-apiserver-5ddc4464bf-hv2l4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf5f1a158c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.505 [INFO][5324] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.505 [INFO][5324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" iface="eth0" netns="" Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.505 [INFO][5324] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.506 [INFO][5324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.533 [INFO][5330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" HandleID="k8s-pod-network.f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.533 [INFO][5330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.533 [INFO][5330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.543 [WARNING][5330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" HandleID="k8s-pod-network.f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.543 [INFO][5330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" HandleID="k8s-pod-network.f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hv2l4-eth0" Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.545 [INFO][5330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:34.549571 containerd[1487]: 2025-01-30 14:19:34.548 [INFO][5324] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff" Jan 30 14:19:34.549995 containerd[1487]: time="2025-01-30T14:19:34.549635688Z" level=info msg="TearDown network for sandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\" successfully" Jan 30 14:19:34.555252 containerd[1487]: time="2025-01-30T14:19:34.555015791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:19:34.555252 containerd[1487]: time="2025-01-30T14:19:34.555128791Z" level=info msg="RemovePodSandbox \"f0e86d66b17efe930a160e9bfa39fc28dab56dc9073983a617edff8fbc9611ff\" returns successfully" Jan 30 14:19:34.556226 containerd[1487]: time="2025-01-30T14:19:34.556123108Z" level=info msg="StopPodSandbox for \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\"" Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.615 [WARNING][5348] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e8c173ab-1e44-495a-b07f-a4b7866f3d63", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd", Pod:"coredns-6f6b679f8f-4sg8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7949b5887a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.616 [INFO][5348] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.616 [INFO][5348] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" iface="eth0" netns="" Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.616 [INFO][5348] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.616 [INFO][5348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.648 [INFO][5354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" HandleID="k8s-pod-network.bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.648 [INFO][5354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.648 [INFO][5354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.663 [WARNING][5354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" HandleID="k8s-pod-network.bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.663 [INFO][5354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" HandleID="k8s-pod-network.bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.666 [INFO][5354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:34.670175 containerd[1487]: 2025-01-30 14:19:34.668 [INFO][5348] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:34.670175 containerd[1487]: time="2025-01-30T14:19:34.670017879Z" level=info msg="TearDown network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\" successfully" Jan 30 14:19:34.670175 containerd[1487]: time="2025-01-30T14:19:34.670046999Z" level=info msg="StopPodSandbox for \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\" returns successfully" Jan 30 14:19:34.671569 containerd[1487]: time="2025-01-30T14:19:34.671021396Z" level=info msg="RemovePodSandbox for \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\"" Jan 30 14:19:34.671569 containerd[1487]: time="2025-01-30T14:19:34.671069236Z" level=info msg="Forcibly stopping sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\"" Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.732 [WARNING][5372] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e8c173ab-1e44-495a-b07f-a4b7866f3d63", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"1ce15bc86d8725f6741b7859f2dee5ccc7a84acfffb6f3e4c2277a6930f9f0dd", Pod:"coredns-6f6b679f8f-4sg8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7949b5887a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.733 [INFO][5372] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.734 [INFO][5372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" iface="eth0" netns="" Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.734 [INFO][5372] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.734 [INFO][5372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.763 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" HandleID="k8s-pod-network.bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.763 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.764 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.775 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" HandleID="k8s-pod-network.bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.775 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" HandleID="k8s-pod-network.bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Workload="ci--4081--3--0--2--dd601a010b-k8s-coredns--6f6b679f8f--4sg8x-eth0" Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.779 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:34.783727 containerd[1487]: 2025-01-30 14:19:34.781 [INFO][5372] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245" Jan 30 14:19:34.784279 containerd[1487]: time="2025-01-30T14:19:34.783721332Z" level=info msg="TearDown network for sandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\" successfully" Jan 30 14:19:34.792578 containerd[1487]: time="2025-01-30T14:19:34.792505945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:19:34.792987 containerd[1487]: time="2025-01-30T14:19:34.792591864Z" level=info msg="RemovePodSandbox \"bfad1400ee793b286b77776b8b62c983c9cdb0239a9397a161e3c1fe7db35245\" returns successfully" Jan 30 14:19:34.793265 containerd[1487]: time="2025-01-30T14:19:34.793070303Z" level=info msg="StopPodSandbox for \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\"" Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.856 [WARNING][5396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0", GenerateName:"calico-kube-controllers-5c9c95dfbf-", Namespace:"calico-system", SelfLink:"", UID:"54e7f95c-309e-483a-bdae-b233ec02fb58", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c9c95dfbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3", Pod:"calico-kube-controllers-5c9c95dfbf-6n4hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1b5c6d1991", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.856 [INFO][5396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.856 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" iface="eth0" netns="" Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.856 [INFO][5396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.856 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.900 [INFO][5403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" HandleID="k8s-pod-network.04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.901 [INFO][5403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.901 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.911 [WARNING][5403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" HandleID="k8s-pod-network.04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.911 [INFO][5403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" HandleID="k8s-pod-network.04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.913 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:34.920789 containerd[1487]: 2025-01-30 14:19:34.917 [INFO][5396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:34.920789 containerd[1487]: time="2025-01-30T14:19:34.920653033Z" level=info msg="TearDown network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\" successfully" Jan 30 14:19:34.920789 containerd[1487]: time="2025-01-30T14:19:34.920680393Z" level=info msg="StopPodSandbox for \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\" returns successfully" Jan 30 14:19:34.923071 containerd[1487]: time="2025-01-30T14:19:34.922475987Z" level=info msg="RemovePodSandbox for \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\"" Jan 30 14:19:34.923071 containerd[1487]: time="2025-01-30T14:19:34.922671227Z" level=info msg="Forcibly stopping sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\"" Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:34.981 [WARNING][5421] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0", GenerateName:"calico-kube-controllers-5c9c95dfbf-", Namespace:"calico-system", SelfLink:"", UID:"54e7f95c-309e-483a-bdae-b233ec02fb58", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c9c95dfbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"6bf7817addc636d25368810e185c2bb53126ecfd7019e26f19b3ec1515ce27c3", Pod:"calico-kube-controllers-5c9c95dfbf-6n4hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1b5c6d1991", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:34.981 [INFO][5421] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:34.981 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" iface="eth0" netns="" Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:34.981 [INFO][5421] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:34.981 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:35.009 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" HandleID="k8s-pod-network.04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:35.009 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:35.009 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:35.023 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" HandleID="k8s-pod-network.04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:35.023 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" HandleID="k8s-pod-network.04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--kube--controllers--5c9c95dfbf--6n4hn-eth0" Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:35.027 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:35.034071 containerd[1487]: 2025-01-30 14:19:35.031 [INFO][5421] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4" Jan 30 14:19:35.036070 containerd[1487]: time="2025-01-30T14:19:35.035991241Z" level=info msg="TearDown network for sandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\" successfully" Jan 30 14:19:35.041088 containerd[1487]: time="2025-01-30T14:19:35.041035026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:19:35.041388 containerd[1487]: time="2025-01-30T14:19:35.041121146Z" level=info msg="RemovePodSandbox \"04d1747174335448dc2a9dfe1577186d9086573cb1c547893c3c0dee855c1da4\" returns successfully" Jan 30 14:19:35.042316 containerd[1487]: time="2025-01-30T14:19:35.041825623Z" level=info msg="StopPodSandbox for \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\"" Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.111 [WARNING][5446] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0", GenerateName:"calico-apiserver-5ddc4464bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0517adcd-3cf1-4360-be3e-43009d876448", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddc4464bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b", Pod:"calico-apiserver-5ddc4464bf-hldb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaea34b63e41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.111 [INFO][5446] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.112 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" iface="eth0" netns="" Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.112 [INFO][5446] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.112 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.167 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" HandleID="k8s-pod-network.f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.167 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.168 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.179 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" HandleID="k8s-pod-network.f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.179 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" HandleID="k8s-pod-network.f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.183 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:35.187674 containerd[1487]: 2025-01-30 14:19:35.185 [INFO][5446] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:35.189388 containerd[1487]: time="2025-01-30T14:19:35.187713502Z" level=info msg="TearDown network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\" successfully" Jan 30 14:19:35.189388 containerd[1487]: time="2025-01-30T14:19:35.187741662Z" level=info msg="StopPodSandbox for \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\" returns successfully" Jan 30 14:19:35.189388 containerd[1487]: time="2025-01-30T14:19:35.188312780Z" level=info msg="RemovePodSandbox for \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\"" Jan 30 14:19:35.189388 containerd[1487]: time="2025-01-30T14:19:35.188347060Z" level=info msg="Forcibly stopping sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\"" Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.260 [WARNING][5471] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0", GenerateName:"calico-apiserver-5ddc4464bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0517adcd-3cf1-4360-be3e-43009d876448", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddc4464bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-dd601a010b", ContainerID:"c45203899acde88f3184d952a0578bb6eb7b714c3d933bb4ccd74cb3ae365c8b", Pod:"calico-apiserver-5ddc4464bf-hldb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaea34b63e41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.261 [INFO][5471] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.261 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" iface="eth0" netns="" Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.261 [INFO][5471] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.261 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.299 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" HandleID="k8s-pod-network.f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.299 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.299 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.309 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" HandleID="k8s-pod-network.f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.309 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" HandleID="k8s-pod-network.f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Workload="ci--4081--3--0--2--dd601a010b-k8s-calico--apiserver--5ddc4464bf--hldb9-eth0" Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.311 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:19:35.316904 containerd[1487]: 2025-01-30 14:19:35.313 [INFO][5471] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6" Jan 30 14:19:35.319037 containerd[1487]: time="2025-01-30T14:19:35.316949511Z" level=info msg="TearDown network for sandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\" successfully" Jan 30 14:19:35.322674 containerd[1487]: time="2025-01-30T14:19:35.322581094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:19:35.322674 containerd[1487]: time="2025-01-30T14:19:35.322661454Z" level=info msg="RemovePodSandbox \"f27b9b43fffee0d095f7303147c7e31aa1c997bd2e4f4ad7b2607a253687b8d6\" returns successfully" Jan 30 14:19:39.185348 systemd[1]: run-containerd-runc-k8s.io-04cdc4e027b2827149c79347cf7facdbf219c2fe61245663f946869c33ef22bb-runc.mV52ff.mount: Deactivated successfully. Jan 30 14:19:41.843377 kubelet[2736]: I0130 14:19:41.842753 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:19:50.082559 kubelet[2736]: I0130 14:19:50.082324 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:19:53.098591 systemd[1]: Started sshd@18-157.90.246.176:22-162.240.226.19:37328.service - OpenSSH per-connection server daemon (162.240.226.19:37328). Jan 30 14:19:53.128548 systemd[1]: Started sshd@19-157.90.246.176:22-178.128.232.125:50136.service - OpenSSH per-connection server daemon (178.128.232.125:50136). Jan 30 14:19:53.754985 sshd[5518]: Invalid user marian from 178.128.232.125 port 50136 Jan 30 14:19:53.867759 sshd[5518]: Received disconnect from 178.128.232.125 port 50136:11: Bye Bye [preauth] Jan 30 14:19:53.869284 sshd[5518]: Disconnected from invalid user marian 178.128.232.125 port 50136 [preauth] Jan 30 14:19:53.870692 systemd[1]: sshd@19-157.90.246.176:22-178.128.232.125:50136.service: Deactivated successfully. Jan 30 14:19:53.980918 sshd[5516]: Invalid user shaseng from 162.240.226.19 port 37328 Jan 30 14:19:54.145278 sshd[5516]: Received disconnect from 162.240.226.19 port 37328:11: Bye Bye [preauth] Jan 30 14:19:54.145278 sshd[5516]: Disconnected from invalid user shaseng 162.240.226.19 port 37328 [preauth] Jan 30 14:19:54.147911 systemd[1]: sshd@18-157.90.246.176:22-162.240.226.19:37328.service: Deactivated successfully. Jan 30 14:20:06.313496 systemd[1]: Started sshd@20-157.90.246.176:22-185.146.232.60:42848.service - OpenSSH per-connection server daemon (185.146.232.60:42848). 
Jan 30 14:20:06.538966 sshd[5551]: Invalid user agung from 185.146.232.60 port 42848 Jan 30 14:20:06.572409 sshd[5551]: Received disconnect from 185.146.232.60 port 42848:11: Bye Bye [preauth] Jan 30 14:20:06.572409 sshd[5551]: Disconnected from invalid user agung 185.146.232.60 port 42848 [preauth] Jan 30 14:20:06.576882 systemd[1]: sshd@20-157.90.246.176:22-185.146.232.60:42848.service: Deactivated successfully. Jan 30 14:20:20.493959 systemd[1]: sshd@14-157.90.246.176:22-117.50.209.157:50770.service: Deactivated successfully. Jan 30 14:20:21.689255 systemd[1]: run-containerd-runc-k8s.io-04cdc4e027b2827149c79347cf7facdbf219c2fe61245663f946869c33ef22bb-runc.WHArwx.mount: Deactivated successfully. Jan 30 14:20:28.276555 systemd[1]: run-containerd-runc-k8s.io-34b5d51362309ef6d0d5275c36ef4561c6723f89e6cd882d5ebf9d30b00e053e-runc.f0aqz9.mount: Deactivated successfully. Jan 30 14:20:35.333094 systemd[1]: Started sshd@21-157.90.246.176:22-36.110.228.254:61756.service - OpenSSH per-connection server daemon (36.110.228.254:61756). Jan 30 14:20:39.201756 systemd[1]: run-containerd-runc-k8s.io-04cdc4e027b2827149c79347cf7facdbf219c2fe61245663f946869c33ef22bb-runc.dtu3vx.mount: Deactivated successfully. Jan 30 14:20:59.968807 systemd[1]: Started sshd@22-157.90.246.176:22-178.128.232.125:48816.service - OpenSSH per-connection server daemon (178.128.232.125:48816). Jan 30 14:21:00.494553 systemd[1]: Started sshd@23-157.90.246.176:22-162.240.226.19:35510.service - OpenSSH per-connection server daemon (162.240.226.19:35510). Jan 30 14:21:00.574123 sshd[5692]: Invalid user admin from 178.128.232.125 port 48816 Jan 30 14:21:00.683383 sshd[5692]: Received disconnect from 178.128.232.125 port 48816:11: Bye Bye [preauth] Jan 30 14:21:00.683383 sshd[5692]: Disconnected from invalid user admin 178.128.232.125 port 48816 [preauth] Jan 30 14:21:00.700583 systemd[1]: sshd@22-157.90.246.176:22-178.128.232.125:48816.service: Deactivated successfully. Jan 30 14:21:01.412246 sshd[5695]: Invalid user cuckoo from 162.240.226.19 port 35510 Jan 30 14:21:01.578578 sshd[5695]: Received disconnect from 162.240.226.19 port 35510:11: Bye Bye [preauth] Jan 30 14:21:01.578578 sshd[5695]: Disconnected from invalid user cuckoo 162.240.226.19 port 35510 [preauth] Jan 30 14:21:01.581291 systemd[1]: sshd@23-157.90.246.176:22-162.240.226.19:35510.service: Deactivated successfully. Jan 30 14:21:18.198856 systemd[1]: Started sshd@24-157.90.246.176:22-185.146.232.60:39056.service - OpenSSH per-connection server daemon (185.146.232.60:39056). Jan 30 14:21:18.447501 sshd[5728]: Invalid user bmk from 185.146.232.60 port 39056 Jan 30 14:21:18.482473 sshd[5728]: Received disconnect from 185.146.232.60 port 39056:11: Bye Bye [preauth] Jan 30 14:21:18.482473 sshd[5728]: Disconnected from invalid user bmk 185.146.232.60 port 39056 [preauth] Jan 30 14:21:18.485150 systemd[1]: sshd@24-157.90.246.176:22-185.146.232.60:39056.service: Deactivated successfully. Jan 30 14:21:28.287064 systemd[1]: run-containerd-runc-k8s.io-34b5d51362309ef6d0d5275c36ef4561c6723f89e6cd882d5ebf9d30b00e053e-runc.a8P7WN.mount: Deactivated successfully. Jan 30 14:21:48.672677 systemd[1]: Started sshd@25-157.90.246.176:22-185.208.156.152:43076.service - OpenSSH per-connection server daemon (185.208.156.152:43076). 
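
From 14:19:53 onward the journal is dominated by Internet background noise: each SSH probe arrives on its own socket-activated unit (sshd@18, sshd@19, sshd@20, …), logs a single "Invalid user … from …" line, disconnects pre-auth ("Bye Bye [preauth]"), and the per-connection service deactivates. A hedged sketch for tallying such probes per source address when post-processing a journal like this one; the regexp is fitted to the exact message shape above and nothing more:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines such as:
//   sshd[5518]: Invalid user marian from 178.128.232.125 port 50136
var invalidUser = regexp.MustCompile(`Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port (\d+)`)

func main() {
	attempts := map[string][]string{} // source IP -> usernames tried
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := invalidUser.FindStringSubmatch(sc.Text()); m != nil {
			attempts[m[2]] = append(attempts[m[2]], m[1])
		}
	}
	for ip, users := range attempts {
		fmt.Printf("%s: %d invalid-user probes %v\n", ip, len(users), users)
	}
}
```

Fed `journalctl --no-pager` output on stdin, it prints one line per scanning address with the usernames it tried.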
Jan 30 14:21:52.553231 sshd[5800]: Invalid user amy from 185.208.156.152 port 43076 Jan 30 14:21:52.976048 sshd[5808]: pam_faillock(sshd:auth): User unknown Jan 30 14:21:52.981897 sshd[5800]: Postponed keyboard-interactive for invalid user amy from 185.208.156.152 port 43076 ssh2 [preauth] Jan 30 14:21:53.182471 sshd[5808]: pam_unix(sshd:auth): check pass; user unknown Jan 30 14:21:53.182510 sshd[5808]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 Jan 30 14:21:53.183321 sshd[5808]: pam_faillock(sshd:auth): User unknown Jan 30 14:21:55.059350 sshd[5800]: PAM: Permission denied for illegal user amy from 185.208.156.152 Jan 30 14:21:55.059350 sshd[5800]: Failed keyboard-interactive/pam for invalid user amy from 185.208.156.152 port 43076 ssh2 Jan 30 14:21:55.445614 sshd[5800]: Connection closed by invalid user amy 185.208.156.152 port 43076 [preauth] Jan 30 14:21:55.447783 systemd[1]: sshd@25-157.90.246.176:22-185.208.156.152:43076.service: Deactivated successfully. Jan 30 14:21:56.527917 systemd[1]: Started sshd@26-157.90.246.176:22-185.208.156.152:43080.service - OpenSSH per-connection server daemon (185.208.156.152:43080). Jan 30 14:21:58.165360 sshd[5817]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 user=root Jan 30 14:21:59.865321 sshd[5814]: PAM: Permission denied for root from 185.208.156.152 Jan 30 14:22:05.072287 sshd[5814]: Connection closed by authenticating user root 185.208.156.152 port 43080 [preauth] Jan 30 14:22:05.074609 systemd[1]: sshd@26-157.90.246.176:22-185.208.156.152:43080.service: Deactivated successfully. Jan 30 14:22:06.147729 systemd[1]: Started sshd@27-157.90.246.176:22-178.128.232.125:47484.service - OpenSSH per-connection server daemon (178.128.232.125:47484). Jan 30 14:22:06.774619 sshd[5844]: Invalid user bsc from 178.128.232.125 port 47484 Jan 30 14:22:06.881706 sshd[5844]: Received disconnect from 178.128.232.125 port 47484:11: Bye Bye [preauth] Jan 30 14:22:06.881706 sshd[5844]: Disconnected from invalid user bsc 178.128.232.125 port 47484 [preauth] Jan 30 14:22:06.884384 systemd[1]: sshd@27-157.90.246.176:22-178.128.232.125:47484.service: Deactivated successfully. Jan 30 14:22:06.916689 systemd[1]: Started sshd@28-157.90.246.176:22-162.240.226.19:33686.service - OpenSSH per-connection server daemon (162.240.226.19:33686). Jan 30 14:22:07.747662 systemd[1]: Started sshd@29-157.90.246.176:22-185.208.156.152:38666.service - OpenSSH per-connection server daemon (185.208.156.152:38666). Jan 30 14:22:07.827132 sshd[5849]: Invalid user partner from 162.240.226.19 port 33686 Jan 30 14:22:07.993035 sshd[5849]: Received disconnect from 162.240.226.19 port 33686:11: Bye Bye [preauth] Jan 30 14:22:07.993035 sshd[5849]: Disconnected from invalid user partner 162.240.226.19 port 33686 [preauth] Jan 30 14:22:07.994647 systemd[1]: sshd@28-157.90.246.176:22-162.240.226.19:33686.service: Deactivated successfully. 
Jan 30 14:22:08.029591 sshd[5852]: Invalid user import from 185.208.156.152 port 38666 Jan 30 14:22:08.033928 sshd[5856]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:08.036209 sshd[5852]: Postponed keyboard-interactive for invalid user import from 185.208.156.152 port 38666 ssh2 [preauth] Jan 30 14:22:08.063012 sshd[5856]: pam_unix(sshd:auth): check pass; user unknown Jan 30 14:22:08.063059 sshd[5856]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 Jan 30 14:22:08.063963 sshd[5856]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:09.803361 sshd[5852]: PAM: Permission denied for illegal user import from 185.208.156.152 Jan 30 14:22:09.804147 sshd[5852]: Failed keyboard-interactive/pam for invalid user import from 185.208.156.152 port 38666 ssh2 Jan 30 14:22:09.846346 sshd[5852]: Connection closed by invalid user import 185.208.156.152 port 38666 [preauth] Jan 30 14:22:09.848173 systemd[1]: sshd@29-157.90.246.176:22-185.208.156.152:38666.service: Deactivated successfully. Jan 30 14:22:12.420558 systemd[1]: Started sshd@30-157.90.246.176:22-185.208.156.152:38146.service - OpenSSH per-connection server daemon (185.208.156.152:38146). Jan 30 14:22:12.562129 sshd[5882]: Invalid user admin from 185.208.156.152 port 38146 Jan 30 14:22:12.565915 sshd[5884]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:12.567774 sshd[5882]: Postponed keyboard-interactive for invalid user admin from 185.208.156.152 port 38146 ssh2 [preauth] Jan 30 14:22:12.592685 sshd[5884]: pam_unix(sshd:auth): check pass; user unknown Jan 30 14:22:12.592726 sshd[5884]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 Jan 30 14:22:12.593476 sshd[5884]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:14.547575 sshd[5882]: PAM: Permission denied for illegal user admin from 185.208.156.152 Jan 30 14:22:14.548574 sshd[5882]: Failed keyboard-interactive/pam for invalid user admin from 185.208.156.152 port 38146 ssh2 Jan 30 14:22:14.585772 sshd[5882]: Connection closed by invalid user admin 185.208.156.152 port 38146 [preauth] Jan 30 14:22:14.588556 systemd[1]: sshd@30-157.90.246.176:22-185.208.156.152:38146.service: Deactivated successfully. Jan 30 14:22:18.412801 systemd[1]: Started sshd@31-157.90.246.176:22-185.208.156.152:35858.service - OpenSSH per-connection server daemon (185.208.156.152:35858). Jan 30 14:22:18.621584 sshd[5902]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 user=root Jan 30 14:22:21.068239 sshd[5900]: PAM: Permission denied for root from 185.208.156.152 Jan 30 14:22:21.127293 sshd[5900]: Connection closed by authenticating user root 185.208.156.152 port 35858 [preauth] Jan 30 14:22:21.130677 systemd[1]: sshd@31-157.90.246.176:22-185.208.156.152:35858.service: Deactivated successfully. Jan 30 14:22:24.899685 systemd[1]: Started sshd@32-157.90.246.176:22-185.208.156.152:35868.service - OpenSSH per-connection server daemon (185.208.156.152:35868). 
Jan 30 14:22:25.129432 sshd[5928]: Invalid user 1502 from 185.208.156.152 port 35868 Jan 30 14:22:25.133103 sshd[5930]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:25.134660 sshd[5928]: Postponed keyboard-interactive for invalid user 1502 from 185.208.156.152 port 35868 ssh2 [preauth] Jan 30 14:22:25.345497 sshd[5930]: pam_unix(sshd:auth): check pass; user unknown Jan 30 14:22:25.345558 sshd[5930]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 Jan 30 14:22:25.345638 sshd[5930]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:27.085607 sshd[5928]: PAM: Permission denied for illegal user 1502 from 185.208.156.152 Jan 30 14:22:27.086665 sshd[5928]: Failed keyboard-interactive/pam for invalid user 1502 from 185.208.156.152 port 35868 ssh2 Jan 30 14:22:27.339907 sshd[5928]: Connection closed by invalid user 1502 185.208.156.152 port 35868 [preauth] Jan 30 14:22:27.342246 systemd[1]: sshd@32-157.90.246.176:22-185.208.156.152:35868.service: Deactivated successfully. Jan 30 14:22:29.589616 systemd[1]: Started sshd@33-157.90.246.176:22-185.146.232.60:39874.service - OpenSSH per-connection server daemon (185.146.232.60:39874). Jan 30 14:22:29.835711 sshd[5957]: Invalid user luisf from 185.146.232.60 port 39874 Jan 30 14:22:29.865998 sshd[5957]: Received disconnect from 185.146.232.60 port 39874:11: Bye Bye [preauth] Jan 30 14:22:29.865998 sshd[5957]: Disconnected from invalid user luisf 185.146.232.60 port 39874 [preauth] Jan 30 14:22:29.868431 systemd[1]: sshd@33-157.90.246.176:22-185.146.232.60:39874.service: Deactivated successfully. Jan 30 14:22:30.368306 systemd[1]: Started sshd@34-157.90.246.176:22-72.50.95.190:37714.service - OpenSSH per-connection server daemon (72.50.95.190:37714). Jan 30 14:22:31.173594 sshd[5962]: Invalid user udin from 72.50.95.190 port 37714 Jan 30 14:22:31.318759 sshd[5962]: Received disconnect from 72.50.95.190 port 37714:11: Bye Bye [preauth] Jan 30 14:22:31.318759 sshd[5962]: Disconnected from invalid user udin 72.50.95.190 port 37714 [preauth] Jan 30 14:22:31.320976 systemd[1]: sshd@34-157.90.246.176:22-72.50.95.190:37714.service: Deactivated successfully. Jan 30 14:22:32.103148 systemd[1]: Started sshd@35-157.90.246.176:22-185.208.156.152:40400.service - OpenSSH per-connection server daemon (185.208.156.152:40400). Jan 30 14:22:32.292247 sshd[5967]: Invalid user craft from 185.208.156.152 port 40400 Jan 30 14:22:32.296050 sshd[5969]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:32.301021 sshd[5967]: Postponed keyboard-interactive for invalid user craft from 185.208.156.152 port 40400 ssh2 [preauth] Jan 30 14:22:32.379958 sshd[5969]: pam_unix(sshd:auth): check pass; user unknown Jan 30 14:22:32.380000 sshd[5969]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 Jan 30 14:22:32.381141 sshd[5969]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:34.416401 sshd[5967]: PAM: Permission denied for illegal user craft from 185.208.156.152 Jan 30 14:22:34.417346 sshd[5967]: Failed keyboard-interactive/pam for invalid user craft from 185.208.156.152 port 40400 ssh2 Jan 30 14:22:34.502223 sshd[5967]: Connection closed by invalid user craft 185.208.156.152 port 40400 [preauth] Jan 30 14:22:34.503868 systemd[1]: sshd@35-157.90.246.176:22-185.208.156.152:40400.service: Deactivated successfully. Jan 30 14:22:35.360399 systemd[1]: sshd@21-157.90.246.176:22-36.110.228.254:61756.service: Deactivated successfully. 
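
Two details stand out in this stretch. First, the 185.208.156.152 probes use keyboard-interactive authentication, so each attempt walks the full PAM stack: pam_faillock records the unknown user, pam_unix logs the authentication failure, and sshd finally reports "Permission denied for illegal user". Second, sshd@21 (36.110.228.254), opened at 14:20:35, is reaped at 14:22:35 having never authenticated — consistent with sshd's default LoginGraceTime of 120 seconds. Since pam_faillock already counts failures per user, a host wanting to act per source IP instead could apply a sliding-window threshold in the spirit of fail2ban; the window and limit below are illustrative, not anyone's defaults:

```go
package main

import (
	"fmt"
	"time"
)

// banTracker decides when a source IP has earned a temporary ban.
type banTracker struct {
	window   time.Duration
	maxFails int
	fails    map[string][]time.Time
}

func newBanTracker(window time.Duration, maxFails int) *banTracker {
	return &banTracker{window: window, maxFails: maxFails, fails: map[string][]time.Time{}}
}

// fail records one authentication failure; it returns true when the IP
// has crossed the threshold within the window and should be banned.
func (b *banTracker) fail(ip string, at time.Time) bool {
	recent := b.fails[ip][:0] // in-place filter of expired entries
	for _, t := range b.fails[ip] {
		if at.Sub(t) <= b.window {
			recent = append(recent, t)
		}
	}
	recent = append(recent, at)
	b.fails[ip] = recent
	return len(recent) >= b.maxFails
}

func main() {
	bt := newBanTracker(10*time.Minute, 3)
	now := time.Now()
	for i := 0; i < 3; i++ {
		if bt.fail("185.208.156.152", now.Add(time.Duration(i)*time.Minute)) {
			fmt.Println("ban 185.208.156.152") // third failure inside the window
		}
	}
}
```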
Jan 30 14:22:38.185247 systemd[1]: Started sshd@36-157.90.246.176:22-185.208.156.152:40406.service - OpenSSH per-connection server daemon (185.208.156.152:40406). Jan 30 14:22:38.462348 sshd[5977]: Invalid user admin from 185.208.156.152 port 40406 Jan 30 14:22:38.465918 sshd[5979]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:38.467707 sshd[5977]: Postponed keyboard-interactive for invalid user admin from 185.208.156.152 port 40406 ssh2 [preauth] Jan 30 14:22:38.590778 sshd[5979]: pam_unix(sshd:auth): check pass; user unknown Jan 30 14:22:38.590819 sshd[5979]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 Jan 30 14:22:38.591929 sshd[5979]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:40.451605 sshd[5977]: PAM: Permission denied for illegal user admin from 185.208.156.152 Jan 30 14:22:40.452073 sshd[5977]: Failed keyboard-interactive/pam for invalid user admin from 185.208.156.152 port 40406 ssh2 Jan 30 14:22:40.479102 sshd[5977]: Connection closed by invalid user admin 185.208.156.152 port 40406 [preauth] Jan 30 14:22:40.482876 systemd[1]: sshd@36-157.90.246.176:22-185.208.156.152:40406.service: Deactivated successfully. Jan 30 14:22:43.903452 systemd[1]: Started sshd@37-157.90.246.176:22-185.208.156.152:37660.service - OpenSSH per-connection server daemon (185.208.156.152:37660). Jan 30 14:22:44.440340 sshd[6003]: Invalid user supervisor from 185.208.156.152 port 37660 Jan 30 14:22:44.444412 sshd[6005]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:44.445512 sshd[6003]: Postponed keyboard-interactive for invalid user supervisor from 185.208.156.152 port 37660 ssh2 [preauth] Jan 30 14:22:44.509991 sshd[6005]: pam_unix(sshd:auth): check pass; user unknown Jan 30 14:22:44.510048 sshd[6005]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 Jan 30 14:22:44.511040 sshd[6005]: pam_faillock(sshd:auth): User unknown Jan 30 14:22:46.525993 sshd[6003]: PAM: Permission denied for illegal user supervisor from 185.208.156.152 Jan 30 14:22:46.527023 sshd[6003]: Failed keyboard-interactive/pam for invalid user supervisor from 185.208.156.152 port 37660 ssh2 Jan 30 14:22:46.636228 sshd[6003]: Connection closed by invalid user supervisor 185.208.156.152 port 37660 [preauth] Jan 30 14:22:46.638879 systemd[1]: sshd@37-157.90.246.176:22-185.208.156.152:37660.service: Deactivated successfully. Jan 30 14:22:52.437817 systemd[1]: Started sshd@38-157.90.246.176:22-185.208.156.152:48094.service - OpenSSH per-connection server daemon (185.208.156.152:48094). Jan 30 14:22:52.617853 sshd[6011]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.208.156.152 user=root Jan 30 14:22:54.398087 sshd[6009]: PAM: Permission denied for root from 185.208.156.152 Jan 30 14:22:54.421333 sshd[6009]: Connection closed by authenticating user root 185.208.156.152 port 48094 [preauth] Jan 30 14:22:54.425344 systemd[1]: sshd@38-157.90.246.176:22-185.208.156.152:48094.service: Deactivated successfully. Jan 30 14:22:58.292681 systemd[1]: run-containerd-runc-k8s.io-34b5d51362309ef6d0d5275c36ef4561c6723f89e6cd882d5ebf9d30b00e053e-runc.lcspoN.mount: Deactivated successfully. Jan 30 14:23:13.611529 systemd[1]: Started sshd@39-157.90.246.176:22-178.128.232.125:46160.service - OpenSSH per-connection server daemon (178.128.232.125:46160). 
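
By 14:23:13 the scanners' cadence is plain: 178.128.232.125 returns at 14:19:53, 14:20:59, 14:22:06, and 14:23:13, and 162.240.226.19 follows the same roughly 66–70 second beat, one username per visit — slow enough to slip under naive per-IP rate limits, and the lockstep timing suggests (though the log cannot prove) a single coordinated operator behind both addresses. Inter-arrival gaps like these can be computed from the "Started sshd@N-…" lines; a sketch, assuming the journal is replayed one entry per line and that the unit names keep the LOCAL:22-REMOTE:PORT shape seen above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// e.g. "Jan 30 14:19:53.128548 systemd[1]: Started sshd@19-157.90.246.176:22-178.128.232.125:50136.service - ..."
var started = regexp.MustCompile(`(\w+ \d+ \d+:\d+:\d+)\.\d+ .*Started sshd@\d+-\S+:22-(\d+\.\d+\.\d+\.\d+):\d+\.service`)

const stamp = "Jan 2 15:04:05" // the journal's year-less timestamp prefix

func main() {
	last := map[string]time.Time{} // source IP -> previous connection time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := started.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		t, err := time.Parse(stamp, m[1])
		if err != nil {
			continue
		}
		if prev, ok := last[m[2]]; ok {
			fmt.Printf("%s returned after %s\n", m[2], t.Sub(prev))
		}
		last[m[2]] = t
	}
}
```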
Jan 30 14:23:14.192222 systemd[1]: Started sshd@40-157.90.246.176:22-162.240.226.19:60096.service - OpenSSH per-connection server daemon (162.240.226.19:60096). Jan 30 14:23:14.227869 sshd[6055]: Invalid user elemental from 178.128.232.125 port 46160 Jan 30 14:23:14.338223 sshd[6055]: Received disconnect from 178.128.232.125 port 46160:11: Bye Bye [preauth] Jan 30 14:23:14.338223 sshd[6055]: Disconnected from invalid user elemental 178.128.232.125 port 46160 [preauth] Jan 30 14:23:14.337330 systemd[1]: sshd@39-157.90.246.176:22-178.128.232.125:46160.service: Deactivated successfully. Jan 30 14:23:15.268486 sshd[6058]: Received disconnect from 162.240.226.19 port 60096:11: Bye Bye [preauth] Jan 30 14:23:15.268486 sshd[6058]: Disconnected from authenticating user root 162.240.226.19 port 60096 [preauth] Jan 30 14:23:15.272303 systemd[1]: sshd@40-157.90.246.176:22-162.240.226.19:60096.service: Deactivated successfully. Jan 30 14:23:21.690805 systemd[1]: run-containerd-runc-k8s.io-04cdc4e027b2827149c79347cf7facdbf219c2fe61245663f946869c33ef22bb-runc.DbQs2S.mount: Deactivated successfully. Jan 30 14:23:25.985541 systemd[1]: Started sshd@41-157.90.246.176:22-139.178.68.195:53358.service - OpenSSH per-connection server daemon (139.178.68.195:53358). Jan 30 14:23:26.748193 systemd[1]: Started sshd@42-157.90.246.176:22-157.245.147.26:42324.service - OpenSSH per-connection server daemon (157.245.147.26:42324). Jan 30 14:23:26.975814 sshd[6087]: Accepted publickey for core from 139.178.68.195 port 53358 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:26.979019 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:26.987610 systemd-logind[1459]: New session 8 of user core. Jan 30 14:23:26.994427 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:23:27.780528 sshd[6087]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:27.785314 systemd[1]: sshd@41-157.90.246.176:22-139.178.68.195:53358.service: Deactivated successfully. Jan 30 14:23:27.789737 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:23:27.793489 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:23:27.794814 systemd-logind[1459]: Removed session 8. Jan 30 14:23:27.836180 sshd[6090]: Invalid user runcloud from 157.245.147.26 port 42324 Jan 30 14:23:28.039449 sshd[6090]: Received disconnect from 157.245.147.26 port 42324:11: Bye Bye [preauth] Jan 30 14:23:28.039449 sshd[6090]: Disconnected from invalid user runcloud 157.245.147.26 port 42324 [preauth] Jan 30 14:23:28.043566 systemd[1]: sshd@42-157.90.246.176:22-157.245.147.26:42324.service: Deactivated successfully. Jan 30 14:23:32.951652 systemd[1]: Started sshd@43-157.90.246.176:22-139.178.68.195:53368.service - OpenSSH per-connection server daemon (139.178.68.195:53368). Jan 30 14:23:33.926763 sshd[6129]: Accepted publickey for core from 139.178.68.195 port 53368 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:33.929048 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:33.936781 systemd-logind[1459]: New session 9 of user core. Jan 30 14:23:33.940669 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:23:34.673328 sshd[6129]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:34.678406 systemd[1]: sshd@43-157.90.246.176:22-139.178.68.195:53368.service: Deactivated successfully. 
Jan 30 14:23:34.681244 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:23:34.682567 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:23:34.683728 systemd-logind[1459]: Removed session 9. Jan 30 14:23:39.855667 systemd[1]: Started sshd@44-157.90.246.176:22-139.178.68.195:40924.service - OpenSSH per-connection server daemon (139.178.68.195:40924). Jan 30 14:23:39.943541 systemd[1]: Started sshd@45-157.90.246.176:22-81.215.228.18:56088.service - OpenSSH per-connection server daemon (81.215.228.18:56088). Jan 30 14:23:40.357935 sshd[6166]: Invalid user adolfo from 81.215.228.18 port 56088 Jan 30 14:23:40.419646 sshd[6166]: Received disconnect from 81.215.228.18 port 56088:11: Bye Bye [preauth] Jan 30 14:23:40.419646 sshd[6166]: Disconnected from invalid user adolfo 81.215.228.18 port 56088 [preauth] Jan 30 14:23:40.425221 systemd[1]: sshd@45-157.90.246.176:22-81.215.228.18:56088.service: Deactivated successfully. Jan 30 14:23:40.843259 sshd[6163]: Accepted publickey for core from 139.178.68.195 port 40924 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:40.844446 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:40.854392 systemd-logind[1459]: New session 10 of user core. Jan 30 14:23:40.865277 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:23:41.601466 sshd[6163]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:41.610480 systemd[1]: sshd@44-157.90.246.176:22-139.178.68.195:40924.service: Deactivated successfully. Jan 30 14:23:41.610828 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:23:41.614410 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:23:41.615770 systemd-logind[1459]: Removed session 10. Jan 30 14:23:41.783741 systemd[1]: Started sshd@46-157.90.246.176:22-139.178.68.195:40936.service - OpenSSH per-connection server daemon (139.178.68.195:40936). Jan 30 14:23:42.350692 systemd[1]: Started sshd@47-157.90.246.176:22-185.146.232.60:48202.service - OpenSSH per-connection server daemon (185.146.232.60:48202). Jan 30 14:23:42.589838 sshd[6186]: Invalid user chenhui from 185.146.232.60 port 48202 Jan 30 14:23:42.621852 sshd[6186]: Received disconnect from 185.146.232.60 port 48202:11: Bye Bye [preauth] Jan 30 14:23:42.621852 sshd[6186]: Disconnected from invalid user chenhui 185.146.232.60 port 48202 [preauth] Jan 30 14:23:42.624578 systemd[1]: sshd@47-157.90.246.176:22-185.146.232.60:48202.service: Deactivated successfully. Jan 30 14:23:42.762625 sshd[6183]: Accepted publickey for core from 139.178.68.195 port 40936 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:42.764966 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:42.772238 systemd-logind[1459]: New session 11 of user core. Jan 30 14:23:42.779610 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:23:43.590902 sshd[6183]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:43.595409 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:23:43.595615 systemd[1]: sshd@46-157.90.246.176:22-139.178.68.195:40936.service: Deactivated successfully. Jan 30 14:23:43.598384 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:23:43.600809 systemd-logind[1459]: Removed session 11. 
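
The traffic from 139.178.68.195 is the first legitimate use of the host in this window: repeated publickey logins as core, each bracketed by systemd-logind's "New session N of user core." and "Removed session N." lines around a pam_unix session open/close. Pairing those two messages yields per-session durations (sessions 8 through 11 above each last under a minute). A sketch of that pairing, again assuming one journal entry per line as journalctl emits it, not the flattened reproduction above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	// e.g. "Jan 30 14:23:26.987610 systemd-logind[1459]: New session 8 of user core."
	opened = regexp.MustCompile(`^(\w+ \d+ \d+:\d+:\d+)\.\d+ .*New session (\d+) of user (\S+)\.`)
	// e.g. "Jan 30 14:23:27.794814 systemd-logind[1459]: Removed session 8."
	closed = regexp.MustCompile(`^(\w+ \d+ \d+:\d+:\d+)\.\d+ .*Removed session (\d+)\.`)
)

const stamp = "Jan 2 15:04:05" // year-less journal prefix

func main() {
	starts := map[string]time.Time{} // session ID -> open time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				starts[m[2]] = t
			}
		} else if m := closed.FindStringSubmatch(line); m != nil {
			if t0, ok := starts[m[2]]; ok {
				if t1, err := time.Parse(stamp, m[1]); err == nil {
					fmt.Printf("session %s lasted %s\n", m[2], t1.Sub(t0))
				}
				delete(starts, m[2])
			}
		}
	}
}
```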
Jan 30 14:23:43.771053 systemd[1]: Started sshd@48-157.90.246.176:22-139.178.68.195:40946.service - OpenSSH per-connection server daemon (139.178.68.195:40946). Jan 30 14:23:44.750422 sshd[6203]: Accepted publickey for core from 139.178.68.195 port 40946 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:44.753787 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:44.763954 systemd-logind[1459]: New session 12 of user core. Jan 30 14:23:44.772507 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:23:45.508164 sshd[6203]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:45.514175 systemd[1]: sshd@48-157.90.246.176:22-139.178.68.195:40946.service: Deactivated successfully. Jan 30 14:23:45.516930 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:23:45.519276 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:23:45.521503 systemd-logind[1459]: Removed session 12. Jan 30 14:23:50.687721 systemd[1]: Started sshd@49-157.90.246.176:22-139.178.68.195:55750.service - OpenSSH per-connection server daemon (139.178.68.195:55750). Jan 30 14:23:51.670897 sshd[6228]: Accepted publickey for core from 139.178.68.195 port 55750 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:51.673190 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:51.679370 systemd-logind[1459]: New session 13 of user core. Jan 30 14:23:51.685521 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:23:52.446048 sshd[6228]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:52.449916 systemd[1]: sshd@49-157.90.246.176:22-139.178.68.195:55750.service: Deactivated successfully. Jan 30 14:23:52.452598 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:23:52.455387 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:23:52.457083 systemd-logind[1459]: Removed session 13. Jan 30 14:23:52.620928 systemd[1]: Started sshd@50-157.90.246.176:22-139.178.68.195:55760.service - OpenSSH per-connection server daemon (139.178.68.195:55760). Jan 30 14:23:53.599782 sshd[6241]: Accepted publickey for core from 139.178.68.195 port 55760 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:53.600829 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:53.607530 systemd-logind[1459]: New session 14 of user core. Jan 30 14:23:53.614521 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:23:54.468031 sshd[6241]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:54.473850 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:23:54.475114 systemd[1]: sshd@50-157.90.246.176:22-139.178.68.195:55760.service: Deactivated successfully. Jan 30 14:23:54.479071 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:23:54.481631 systemd-logind[1459]: Removed session 14. Jan 30 14:23:54.639616 systemd[1]: Started sshd@51-157.90.246.176:22-139.178.68.195:55764.service - OpenSSH per-connection server daemon (139.178.68.195:55764). 
Jan 30 14:23:55.631868 sshd[6252]: Accepted publickey for core from 139.178.68.195 port 55764 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:55.634669 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:55.641866 systemd-logind[1459]: New session 15 of user core. Jan 30 14:23:55.648494 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:24:05.708913 sshd[6252]: pam_unix(sshd:session): session closed for user core Jan 30 14:24:05.716269 systemd[1]: sshd@51-157.90.246.176:22-139.178.68.195:55764.service: Deactivated successfully. Jan 30 14:24:05.719608 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:24:05.720695 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:24:05.722123 systemd-logind[1459]: Removed session 15. Jan 30 14:24:05.884603 systemd[1]: Started sshd@52-157.90.246.176:22-139.178.68.195:58644.service - OpenSSH per-connection server daemon (139.178.68.195:58644). Jan 30 14:24:06.884696 sshd[6297]: Accepted publickey for core from 139.178.68.195 port 58644 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:24:06.886883 sshd[6297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:24:06.897254 systemd-logind[1459]: New session 16 of user core. Jan 30 14:24:06.902685 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:24:07.797263 sshd[6297]: pam_unix(sshd:session): session closed for user core Jan 30 14:24:07.802764 systemd[1]: sshd@52-157.90.246.176:22-139.178.68.195:58644.service: Deactivated successfully. Jan 30 14:24:07.806906 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:24:07.808375 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:24:07.809482 systemd-logind[1459]: Removed session 16. Jan 30 14:24:07.969969 systemd[1]: Started sshd@53-157.90.246.176:22-139.178.68.195:58648.service - OpenSSH per-connection server daemon (139.178.68.195:58648). Jan 30 14:24:08.949836 sshd[6308]: Accepted publickey for core from 139.178.68.195 port 58648 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:24:08.950900 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:24:08.956315 systemd-logind[1459]: New session 17 of user core. Jan 30 14:24:08.964453 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:24:09.701538 sshd[6308]: pam_unix(sshd:session): session closed for user core Jan 30 14:24:09.707481 systemd[1]: sshd@53-157.90.246.176:22-139.178.68.195:58648.service: Deactivated successfully. Jan 30 14:24:09.711333 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:24:09.714611 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:24:09.716035 systemd-logind[1459]: Removed session 17. Jan 30 14:24:14.875567 systemd[1]: Started sshd@54-157.90.246.176:22-139.178.68.195:58658.service - OpenSSH per-connection server daemon (139.178.68.195:58658). Jan 30 14:24:15.854663 sshd[6345]: Accepted publickey for core from 139.178.68.195 port 58658 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:24:15.856622 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:24:15.862123 systemd-logind[1459]: New session 18 of user core. Jan 30 14:24:15.869439 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 30 14:24:16.606614 sshd[6345]: pam_unix(sshd:session): session closed for user core Jan 30 14:24:16.612254 systemd[1]: sshd@54-157.90.246.176:22-139.178.68.195:58658.service: Deactivated successfully. Jan 30 14:24:16.616981 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:24:16.618325 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:24:16.619798 systemd-logind[1459]: Removed session 18. Jan 30 14:24:21.779984 systemd[1]: Started sshd@55-157.90.246.176:22-139.178.68.195:42114.service - OpenSSH per-connection server daemon (139.178.68.195:42114). Jan 30 14:24:22.613656 systemd[1]: Started sshd@56-157.90.246.176:22-178.128.232.125:44834.service - OpenSSH per-connection server daemon (178.128.232.125:44834). Jan 30 14:24:22.756233 sshd[6378]: Accepted publickey for core from 139.178.68.195 port 42114 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:24:22.757881 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:24:22.764329 systemd-logind[1459]: New session 19 of user core. Jan 30 14:24:22.765740 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:24:23.229586 sshd[6381]: Invalid user repository from 178.128.232.125 port 44834 Jan 30 14:24:23.339759 sshd[6381]: Received disconnect from 178.128.232.125 port 44834:11: Bye Bye [preauth] Jan 30 14:24:23.339759 sshd[6381]: Disconnected from invalid user repository 178.128.232.125 port 44834 [preauth] Jan 30 14:24:23.341493 systemd[1]: sshd@56-157.90.246.176:22-178.128.232.125:44834.service: Deactivated successfully. Jan 30 14:24:23.518069 sshd[6378]: pam_unix(sshd:session): session closed for user core Jan 30 14:24:23.523057 systemd[1]: sshd@55-157.90.246.176:22-139.178.68.195:42114.service: Deactivated successfully. Jan 30 14:24:23.526767 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:24:23.529389 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:24:23.530888 systemd-logind[1459]: Removed session 19. Jan 30 14:24:23.934560 systemd[1]: Started sshd@57-157.90.246.176:22-162.240.226.19:58286.service - OpenSSH per-connection server daemon (162.240.226.19:58286). Jan 30 14:24:24.830432 sshd[6396]: Invalid user sri from 162.240.226.19 port 58286 Jan 30 14:24:24.992977 sshd[6396]: Received disconnect from 162.240.226.19 port 58286:11: Bye Bye [preauth] Jan 30 14:24:24.992977 sshd[6396]: Disconnected from invalid user sri 162.240.226.19 port 58286 [preauth] Jan 30 14:24:24.995018 systemd[1]: sshd@57-157.90.246.176:22-162.240.226.19:58286.service: Deactivated successfully. Jan 30 14:24:38.208947 systemd[1]: cri-containerd-8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5.scope: Deactivated successfully. Jan 30 14:24:38.209273 systemd[1]: cri-containerd-8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5.scope: Consumed 6.640s CPU time, 18.1M memory peak, 0B memory swap peak. 
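
At 14:24:38 the character of the log changes: cri-containerd scopes begin dying, and because resource accounting is enabled on these scopes, systemd reports each container's lifetime consumption as it stops ("Consumed 6.640s CPU time, 18.1M memory peak, 0B memory swap peak"). Those exit lines make a quick post-mortem table; a sketch fitted to the message shape above (truncating the container ID to 12 hex digits is a display choice here, not a containerd convention):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

// Matches systemd's stop-time accounting line, e.g.
//   cri-containerd-8e75e403….scope: Consumed 6.640s CPU time, 18.1M memory peak, 0B memory swap peak.
var consumed = regexp.MustCompile(
	`cri-containerd-([0-9a-f]{12})[0-9a-f]*\.scope: Consumed ([0-9.]+)s CPU time(?:, (\S+) memory peak)?`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		m := consumed.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		cpu, _ := strconv.ParseFloat(m[2], 64)
		mem := m[3]
		if mem == "" {
			mem = "n/a" // some scopes report CPU time only
		}
		fmt.Printf("container %s: %.3fs CPU, peak memory %s\n", m[1], cpu, mem)
	}
}
```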
Jan 30 14:24:38.252267 containerd[1487]: time="2025-01-30T14:24:38.249115551Z" level=info msg="shim disconnected" id=8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5 namespace=k8s.io Jan 30 14:24:38.252267 containerd[1487]: time="2025-01-30T14:24:38.249221155Z" level=warning msg="cleaning up after shim disconnected" id=8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5 namespace=k8s.io Jan 30 14:24:38.252267 containerd[1487]: time="2025-01-30T14:24:38.249231915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:24:38.255351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5-rootfs.mount: Deactivated successfully. Jan 30 14:24:38.650422 kubelet[2736]: E0130 14:24:38.649732 2736 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57896->10.0.0.2:2379: read: connection timed out" Jan 30 14:24:39.026857 kubelet[2736]: I0130 14:24:39.026685 2736 scope.go:117] "RemoveContainer" containerID="8e75e403191bb3953c957b8be403632f7b78e82943e0c57a9facd201cdc9f3e5" Jan 30 14:24:39.029892 containerd[1487]: time="2025-01-30T14:24:39.029547316Z" level=info msg="CreateContainer within sandbox \"1a21525658b392dbc75aea3ee749701cf6019b3677775f1b06f49aecdf18b433\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 30 14:24:39.053459 containerd[1487]: time="2025-01-30T14:24:39.053382694Z" level=info msg="CreateContainer within sandbox \"1a21525658b392dbc75aea3ee749701cf6019b3677775f1b06f49aecdf18b433\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3897e8497379db1c9a74a68d2fcd2cdf10fca214cfc12beb2ca499eebe5174c8\"" Jan 30 14:24:39.055000 containerd[1487]: time="2025-01-30T14:24:39.054961034Z" level=info msg="StartContainer for \"3897e8497379db1c9a74a68d2fcd2cdf10fca214cfc12beb2ca499eebe5174c8\"" Jan 30 14:24:39.096476 systemd[1]: Started cri-containerd-3897e8497379db1c9a74a68d2fcd2cdf10fca214cfc12beb2ca499eebe5174c8.scope - libcontainer container 3897e8497379db1c9a74a68d2fcd2cdf10fca214cfc12beb2ca499eebe5174c8. Jan 30 14:24:39.140846 containerd[1487]: time="2025-01-30T14:24:39.140658862Z" level=info msg="StartContainer for \"3897e8497379db1c9a74a68d2fcd2cdf10fca214cfc12beb2ca499eebe5174c8\" returns successfully" Jan 30 14:24:39.806611 systemd[1]: cri-containerd-416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022.scope: Deactivated successfully. Jan 30 14:24:39.806884 systemd[1]: cri-containerd-416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022.scope: Consumed 6.546s CPU time. Jan 30 14:24:39.834791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022-rootfs.mount: Deactivated successfully. 
Jan 30 14:24:39.837482 containerd[1487]: time="2025-01-30T14:24:39.837392907Z" level=info msg="shim disconnected" id=416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022 namespace=k8s.io Jan 30 14:24:39.837482 containerd[1487]: time="2025-01-30T14:24:39.837474910Z" level=warning msg="cleaning up after shim disconnected" id=416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022 namespace=k8s.io Jan 30 14:24:39.837482 containerd[1487]: time="2025-01-30T14:24:39.837484230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:24:40.033095 kubelet[2736]: I0130 14:24:40.033051 2736 scope.go:117] "RemoveContainer" containerID="416676058382877d6b61ae2917e06bfbf249ed29b2d532e6eb2e143bba4ad022" Jan 30 14:24:40.047901 containerd[1487]: time="2025-01-30T14:24:40.047060832Z" level=info msg="CreateContainer within sandbox \"e0a982e46a1001f6a968e76e1bcbf54cda00cdf546edec0ca4724821d679eaed\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 30 14:24:40.072639 containerd[1487]: time="2025-01-30T14:24:40.072521064Z" level=info msg="CreateContainer within sandbox \"e0a982e46a1001f6a968e76e1bcbf54cda00cdf546edec0ca4724821d679eaed\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3b31c21586b95743ff04b64a25e09281e8030d06c645ae0a3a13ffa2475210e3\"" Jan 30 14:24:40.074436 containerd[1487]: time="2025-01-30T14:24:40.073829633Z" level=info msg="StartContainer for \"3b31c21586b95743ff04b64a25e09281e8030d06c645ae0a3a13ffa2475210e3\"" Jan 30 14:24:40.110437 systemd[1]: Started cri-containerd-3b31c21586b95743ff04b64a25e09281e8030d06c645ae0a3a13ffa2475210e3.scope - libcontainer container 3b31c21586b95743ff04b64a25e09281e8030d06c645ae0a3a13ffa2475210e3. Jan 30 14:24:40.145591 containerd[1487]: time="2025-01-30T14:24:40.145466192Z" level=info msg="StartContainer for \"3b31c21586b95743ff04b64a25e09281e8030d06c645ae0a3a13ffa2475210e3\" returns successfully" Jan 30 14:24:43.008017 kubelet[2736]: E0130 14:24:43.000740 2736 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57706->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-0-2-dd601a010b.181f7e80bcd61bbc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-0-2-dd601a010b,UID:b1b9b17d3e887d78b0d7ee8426772742,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-dd601a010b,},FirstTimestamp:2025-01-30 14:24:32.577362876 +0000 UTC m=+358.950631222,LastTimestamp:2025-01-30 14:24:32.577362876 +0000 UTC m=+358.950631222,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-dd601a010b,}" Jan 30 14:24:43.776346 systemd[1]: cri-containerd-fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0.scope: Deactivated successfully. Jan 30 14:24:43.777297 systemd[1]: cri-containerd-fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0.scope: Consumed 2.357s CPU time, 16.1M memory peak, 0B memory swap peak. 
Jan 30 14:24:43.801904 containerd[1487]: time="2025-01-30T14:24:43.801669570Z" level=info msg="shim disconnected" id=fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0 namespace=k8s.io Jan 30 14:24:43.801904 containerd[1487]: time="2025-01-30T14:24:43.801735493Z" level=warning msg="cleaning up after shim disconnected" id=fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0 namespace=k8s.io Jan 30 14:24:43.801904 containerd[1487]: time="2025-01-30T14:24:43.801745453Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:24:43.804072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0-rootfs.mount: Deactivated successfully. Jan 30 14:24:44.049127 kubelet[2736]: I0130 14:24:44.049058 2736 scope.go:117] "RemoveContainer" containerID="fdf1124156f6f9b2a1ca8486e149bba51bf59e74f8fef454867bd0ad208636f0" Jan 30 14:24:44.051902 containerd[1487]: time="2025-01-30T14:24:44.051841311Z" level=info msg="CreateContainer within sandbox \"89c2d376d03be764c12b8bbc30ada8f68e4ef74eec8f51f47c86157d5d21139a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 30 14:24:44.075456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1943943615.mount: Deactivated successfully. Jan 30 14:24:44.080729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752931502.mount: Deactivated successfully. Jan 30 14:24:44.081489 containerd[1487]: time="2025-01-30T14:24:44.081223978Z" level=info msg="CreateContainer within sandbox \"89c2d376d03be764c12b8bbc30ada8f68e4ef74eec8f51f47c86157d5d21139a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6fe887f81f38b03bc7686dd7758430f5a27f0c213bf4579e82fd867ee9410266\"" Jan 30 14:24:44.082069 containerd[1487]: time="2025-01-30T14:24:44.081931083Z" level=info msg="StartContainer for \"6fe887f81f38b03bc7686dd7758430f5a27f0c213bf4579e82fd867ee9410266\"" Jan 30 14:24:44.116469 systemd[1]: Started cri-containerd-6fe887f81f38b03bc7686dd7758430f5a27f0c213bf4579e82fd867ee9410266.scope - libcontainer container 6fe887f81f38b03bc7686dd7758430f5a27f0c213bf4579e82fd867ee9410266. Jan 30 14:24:44.157577 containerd[1487]: time="2025-01-30T14:24:44.157457986Z" level=info msg="StartContainer for \"6fe887f81f38b03bc7686dd7758430f5a27f0c213bf4579e82fd867ee9410266\" returns successfully"