Dec 13 01:27:58.285947 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:27:58.285968 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:27:58.285976 kernel: KASLR enabled Dec 13 01:27:58.285982 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 13 01:27:58.285989 kernel: printk: bootconsole [pl11] enabled Dec 13 01:27:58.285994 kernel: efi: EFI v2.7 by EDK II Dec 13 01:27:58.286001 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Dec 13 01:27:58.286007 kernel: random: crng init done Dec 13 01:27:58.286013 kernel: ACPI: Early table checksum verification disabled Dec 13 01:27:58.286019 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Dec 13 01:27:58.286025 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:58.286031 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:58.286039 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 13 01:27:58.286045 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:58.286052 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:58.286059 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:58.286065 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:58.286073 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:58.286079 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:58.286085 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 13 01:27:58.286092 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:58.286098 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 13 01:27:58.286104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 13 01:27:58.286110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Dec 13 01:27:58.286117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Dec 13 01:27:58.286123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Dec 13 01:27:58.286129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Dec 13 01:27:58.286135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Dec 13 01:27:58.286143 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Dec 13 01:27:58.286149 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Dec 13 01:27:58.286156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Dec 13 01:27:58.286162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Dec 13 01:27:58.286168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Dec 13 01:27:58.286174 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Dec 13 01:27:58.286181 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Dec 13 01:27:58.286187 kernel: Zone ranges: Dec 13 01:27:58.286193 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Dec 13 01:27:58.286199 kernel: DMA32 empty Dec 13 01:27:58.286205 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:27:58.286211 kernel: Movable zone start for each node Dec 13 01:27:58.286221 kernel: Early memory node ranges Dec 13 01:27:58.286228 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 13 01:27:58.286235 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Dec 13 01:27:58.286241 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Dec 13 01:27:58.286248 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Dec 13 01:27:58.286256 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Dec 13 01:27:58.286263 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Dec 13 01:27:58.286269 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:27:58.286276 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 01:27:58.286283 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 13 01:27:58.286289 kernel: psci: probing for conduit method from ACPI. Dec 13 01:27:58.286296 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 01:27:58.286303 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:27:58.286309 kernel: psci: MIGRATE_INFO_TYPE not supported. Dec 13 01:27:58.286316 kernel: psci: SMC Calling Convention v1.4 Dec 13 01:27:58.286322 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 01:27:58.286329 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 13 01:27:58.286337 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:27:58.286344 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:27:58.286351 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:27:58.286357 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:27:58.286364 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:27:58.286370 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:27:58.286377 kernel: CPU features: detected: Spectre-BHB Dec 13 01:27:58.286384 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:27:58.286390 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:27:58.286397 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:27:58.286403 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Dec 13 01:27:58.286411 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:27:58.286418 kernel: alternatives: applying boot alternatives Dec 13 01:27:58.286426 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:27:58.286433 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:27:58.286440 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:27:58.286446 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:27:58.286453 kernel: Fallback order for Node 0: 0 Dec 13 01:27:58.286459 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Dec 13 01:27:58.286466 kernel: Policy zone: Normal Dec 13 01:27:58.286472 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:27:58.286479 kernel: software IO TLB: area num 2. Dec 13 01:27:58.286487 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Dec 13 01:27:58.286494 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Dec 13 01:27:58.286501 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:27:58.286507 kernel: trace event string verifier disabled Dec 13 01:27:58.286514 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:27:58.286521 kernel: rcu: RCU event tracing is enabled. Dec 13 01:27:58.286528 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:27:58.286534 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:27:58.286541 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:27:58.286548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:27:58.286555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:27:58.286563 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:27:58.286569 kernel: GICv3: 960 SPIs implemented Dec 13 01:27:58.286576 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:27:58.286582 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:27:58.286589 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:27:58.286596 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 01:27:58.286603 kernel: ITS: No ITS available, not enabling LPIs Dec 13 01:27:58.286609 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:27:58.286616 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:27:58.286623 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:27:58.286630 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:27:58.286637 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:27:58.286645 kernel: Console: colour dummy device 80x25 Dec 13 01:27:58.286652 kernel: printk: console [tty1] enabled Dec 13 01:27:58.286659 kernel: ACPI: Core revision 20230628 Dec 13 01:27:58.286666 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:27:58.286672 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:27:58.286679 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:27:58.286686 kernel: landlock: Up and running. Dec 13 01:27:58.286693 kernel: SELinux: Initializing. Dec 13 01:27:58.286700 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:27:58.286708 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:27:58.286715 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:27:58.286722 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 13 01:27:58.286729 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 01:27:58.286735 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 01:27:58.286742 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 01:27:58.286749 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:27:58.286762 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:27:58.286769 kernel: Remapping and enabling EFI services. Dec 13 01:27:58.286776 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:27:58.286783 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:27:58.286792 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 01:27:58.286799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:27:58.286806 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:27:58.286814 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:27:58.286821 kernel: SMP: Total of 2 processors activated. Dec 13 01:27:58.286838 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:27:58.286848 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 01:27:58.286855 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:27:58.286862 kernel: CPU features: detected: CRC32 instructions Dec 13 01:27:58.286869 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:27:58.286876 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:27:58.286884 kernel: CPU features: detected: Privileged Access Never Dec 13 01:27:58.286891 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:27:58.286898 kernel: alternatives: applying system-wide alternatives Dec 13 01:27:58.286905 kernel: devtmpfs: initialized Dec 13 01:27:58.286914 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:27:58.286921 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:27:58.286928 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:27:58.286935 kernel: SMBIOS 3.1.0 present. Dec 13 01:27:58.286942 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 01:27:58.286949 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:27:58.286957 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:27:58.286964 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:27:58.286973 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:27:58.286980 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:27:58.286987 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 01:27:58.286994 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:27:58.287001 kernel: cpuidle: using governor menu Dec 13 01:27:58.287009 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 01:27:58.287016 kernel: ASID allocator initialised with 32768 entries Dec 13 01:27:58.287023 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:27:58.287030 kernel: Serial: AMBA PL011 UART driver Dec 13 01:27:58.287039 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:27:58.287046 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:27:58.287053 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:27:58.287060 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:27:58.287067 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:27:58.287075 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:27:58.287082 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:27:58.287089 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:27:58.287096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:27:58.287104 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:27:58.287112 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:27:58.287119 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:27:58.287126 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:27:58.287133 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:27:58.287140 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:27:58.287147 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:27:58.287154 kernel: ACPI: Interpreter enabled Dec 13 01:27:58.287162 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:27:58.287169 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:27:58.287177 kernel: printk: console [ttyAMA0] enabled Dec 13 01:27:58.287184 kernel: printk: bootconsole [pl11] disabled Dec 13 01:27:58.287192 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 01:27:58.287199 kernel: iommu: Default domain type: Translated Dec 13 01:27:58.287206 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:27:58.287213 kernel: efivars: Registered efivars operations Dec 13 01:27:58.287220 kernel: vgaarb: loaded Dec 13 01:27:58.287227 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:27:58.287234 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:27:58.287243 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:27:58.287250 kernel: pnp: PnP ACPI init Dec 13 01:27:58.287257 kernel: pnp: PnP ACPI: found 0 devices Dec 13 01:27:58.287264 kernel: NET: Registered PF_INET protocol family Dec 13 01:27:58.287271 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:27:58.287279 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:27:58.287286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:27:58.287293 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:27:58.287302 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:27:58.287309 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:27:58.287316 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:27:58.287323 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:27:58.287331 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:27:58.287338 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:27:58.287345 kernel: kvm [1]: HYP mode not available Dec 13 01:27:58.287353 kernel: Initialise system trusted keyrings Dec 13 01:27:58.287360 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:27:58.287368 kernel: Key type asymmetric registered Dec 13 01:27:58.287375 kernel: Asymmetric key parser 'x509' registered Dec 13 01:27:58.287383 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:27:58.287390 kernel: io scheduler mq-deadline registered Dec 13 01:27:58.287397 kernel: io scheduler kyber registered Dec 13 01:27:58.287404 kernel: io scheduler bfq registered Dec 13 01:27:58.287411 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:27:58.287418 kernel: thunder_xcv, ver 1.0 Dec 13 01:27:58.287425 kernel: thunder_bgx, ver 1.0 Dec 13 01:27:58.287432 kernel: nicpf, ver 1.0 Dec 13 01:27:58.287441 kernel: nicvf, ver 1.0 Dec 13 01:27:58.287567 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:27:58.287640 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:27:57 UTC (1734053277) Dec 13 01:27:58.287650 kernel: efifb: probing for efifb Dec 13 01:27:58.287658 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:27:58.287665 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:27:58.287672 kernel: efifb: scrolling: redraw Dec 13 01:27:58.287681 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:27:58.287688 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:27:58.287696 kernel: fb0: EFI VGA frame buffer device Dec 13 01:27:58.287703 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 01:27:58.287710 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:27:58.287717 kernel: No ACPI PMU IRQ for CPU0 Dec 13 01:27:58.287724 kernel: No ACPI PMU IRQ for CPU1 Dec 13 01:27:58.287731 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 01:27:58.287739 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:27:58.287748 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:27:58.287755 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:27:58.287762 kernel: Segment Routing with IPv6 Dec 13 01:27:58.287769 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:27:58.287776 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:27:58.287784 kernel: Key type dns_resolver registered Dec 13 01:27:58.287791 kernel: registered taskstats version 1 Dec 13 01:27:58.287798 kernel: Loading compiled-in X.509 certificates Dec 13 01:27:58.287805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:27:58.287812 kernel: Key type .fscrypt registered Dec 13 01:27:58.287821 kernel: Key type fscrypt-provisioning registered Dec 13 01:27:58.287839 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:27:58.287846 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:27:58.287854 kernel: ima: No architecture policies found Dec 13 01:27:58.287861 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:27:58.287868 kernel: clk: Disabling unused clocks Dec 13 01:27:58.287875 kernel: Freeing unused kernel memory: 39360K Dec 13 01:27:58.287882 kernel: Run /init as init process Dec 13 01:27:58.287892 kernel: with arguments: Dec 13 01:27:58.287899 kernel: /init Dec 13 01:27:58.287905 kernel: with environment: Dec 13 01:27:58.287912 kernel: HOME=/ Dec 13 01:27:58.287919 kernel: TERM=linux Dec 13 01:27:58.287927 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:27:58.287936 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:27:58.287945 systemd[1]: Detected virtualization microsoft. Dec 13 01:27:58.287955 systemd[1]: Detected architecture arm64. Dec 13 01:27:58.287962 systemd[1]: Running in initrd. Dec 13 01:27:58.287970 systemd[1]: No hostname configured, using default hostname. Dec 13 01:27:58.287978 systemd[1]: Hostname set to . Dec 13 01:27:58.287986 systemd[1]: Initializing machine ID from random generator. Dec 13 01:27:58.287993 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:27:58.288001 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:27:58.288009 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:27:58.288019 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:27:58.288027 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:27:58.288035 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:27:58.288043 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:27:58.288052 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:27:58.288061 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:27:58.288068 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:27:58.288078 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:27:58.288086 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:27:58.288094 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:27:58.288101 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:27:58.288109 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:27:58.288117 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:27:58.288125 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:27:58.288133 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:27:58.288142 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:27:58.288150 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:27:58.288158 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:27:58.288166 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:27:58.288174 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:27:58.288181 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:27:58.288189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:27:58.288197 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:27:58.288205 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:27:58.288215 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:27:58.288223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:27:58.288247 systemd-journald[217]: Collecting audit messages is disabled. Dec 13 01:27:58.288265 systemd-journald[217]: Journal started Dec 13 01:27:58.288286 systemd-journald[217]: Runtime Journal (/run/log/journal/5b59b7ce8fac4343866a4f8fab112705) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:27:58.309406 systemd-modules-load[218]: Inserted module 'overlay' Dec 13 01:27:58.316294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:58.331871 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:27:58.331932 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:27:58.346220 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:27:58.360476 kernel: Bridge firewalling registered Dec 13 01:27:58.355188 systemd-modules-load[218]: Inserted module 'br_netfilter' Dec 13 01:27:58.356131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:27:58.367513 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:27:58.378001 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:27:58.389714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:58.413133 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:27:58.426149 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:27:58.440996 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:27:58.462975 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:27:58.476717 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:58.491391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:27:58.497130 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:27:58.508813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:27:58.535056 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:27:58.551182 dracut-cmdline[250]: dracut-dracut-053 Dec 13 01:27:58.551430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 01:27:58.584301 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:27:58.567721 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:27:58.621000 systemd-resolved[257]: Positive Trust Anchors: Dec 13 01:27:58.621009 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:27:58.621040 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:27:58.623113 systemd-resolved[257]: Defaulting to hostname 'linux'. Dec 13 01:27:58.624045 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:27:58.631000 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:27:58.649793 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:27:58.750853 kernel: SCSI subsystem initialized Dec 13 01:27:58.757852 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:27:58.767849 kernel: iscsi: registered transport (tcp) Dec 13 01:27:58.785455 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:27:58.785527 kernel: QLogic iSCSI HBA Driver Dec 13 01:27:58.819634 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:27:58.834097 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:27:58.865128 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:27:58.865188 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:27:58.871613 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:27:58.934859 kernel: raid6: neonx8 gen() 15776 MB/s Dec 13 01:27:58.940848 kernel: raid6: neonx4 gen() 15653 MB/s Dec 13 01:27:58.960844 kernel: raid6: neonx2 gen() 13293 MB/s Dec 13 01:27:58.981844 kernel: raid6: neonx1 gen() 10451 MB/s Dec 13 01:27:59.001838 kernel: raid6: int64x8 gen() 6962 MB/s Dec 13 01:27:59.021843 kernel: raid6: int64x4 gen() 7350 MB/s Dec 13 01:27:59.042843 kernel: raid6: int64x2 gen() 6125 MB/s Dec 13 01:27:59.065563 kernel: raid6: int64x1 gen() 5061 MB/s Dec 13 01:27:59.065593 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s Dec 13 01:27:59.089174 kernel: raid6: .... 
xor() 11859 MB/s, rmw enabled Dec 13 01:27:59.089204 kernel: raid6: using neon recovery algorithm Dec 13 01:27:59.101168 kernel: xor: measuring software checksum speed Dec 13 01:27:59.101201 kernel: 8regs : 19802 MB/sec Dec 13 01:27:59.104770 kernel: 32regs : 19646 MB/sec Dec 13 01:27:59.108456 kernel: arm64_neon : 26630 MB/sec Dec 13 01:27:59.112603 kernel: xor: using function: arm64_neon (26630 MB/sec) Dec 13 01:27:59.163052 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:27:59.172941 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:27:59.186959 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:27:59.212384 systemd-udevd[437]: Using default interface naming scheme 'v255'. Dec 13 01:27:59.217541 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:27:59.244147 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:27:59.260973 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Dec 13 01:27:59.290591 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:27:59.313068 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:27:59.351025 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:27:59.370044 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:27:59.395922 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:27:59.407425 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:27:59.419897 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:27:59.431698 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:27:59.447852 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 01:27:59.450093 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:27:59.481721 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:27:59.481742 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:27:59.481752 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 01:27:59.478126 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:27:59.511494 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:27:59.511653 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:27:59.511672 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:27:59.501808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:27:59.561397 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 01:27:59.561420 kernel: PTP clock support registered Dec 13 01:27:59.561430 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 01:27:59.561439 kernel: hv_vmbus: registering driver hv_utils Dec 13 01:27:59.561448 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 01:27:59.561464 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 01:27:59.561473 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 01:27:59.501982 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:27:59.825768 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:27:59.825792 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:27:59.533169 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:27:59.555898 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:27:59.857496 kernel: scsi host0: storvsc_host_t Dec 13 01:27:59.857683 kernel: scsi host1: storvsc_host_t Dec 13 01:27:59.857708 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:27:59.556116 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:59.875431 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:27:59.817982 systemd-resolved[257]: Clock change detected. Flushing caches. Dec 13 01:27:59.839982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:59.886937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:59.906515 kernel: hv_netvsc 0022487d-d035-0022-487d-d0350022487d eth0: VF slot 1 added Dec 13 01:27:59.920788 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:59.949206 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:27:59.949230 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:27:59.977344 kernel: hv_pci 9b03990f-f913-4116-a759-d0b4c1f5d118: PCI VMBus probing: Using version 0x10004 Dec 13 01:28:00.071353 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:28:00.071370 kernel: hv_pci 9b03990f-f913-4116-a759-d0b4c1f5d118: PCI host bridge to bus f913:00 Dec 13 01:28:00.071528 kernel: pci_bus f913:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 01:28:00.071655 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:28:00.071765 kernel: pci_bus f913:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:28:00.071850 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:28:00.071941 kernel: pci f913:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 01:28:00.072049 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:28:00.072131 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:28:00.072224 kernel: pci f913:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:28:00.072314 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:28:00.072395 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:28:00.072475 kernel: pci f913:00:02.0: enabling Extended Tags Dec 13 01:28:00.072610 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:28:00.072624 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:28:00.072707 kernel: pci f913:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f913:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 01:28:00.072794 kernel: pci_bus f913:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:28:00.072869 kernel: pci f913:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:27:59.959679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:28:00.022534 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:28:00.119676 kernel: mlx5_core f913:00:02.0: enabling device (0000 -> 0002) Dec 13 01:28:00.341075 kernel: mlx5_core f913:00:02.0: firmware version: 16.30.1284 Dec 13 01:28:00.341193 kernel: hv_netvsc 0022487d-d035-0022-487d-d0350022487d eth0: VF registering: eth1 Dec 13 01:28:00.341278 kernel: mlx5_core f913:00:02.0 eth1: joined to eth0 Dec 13 01:28:00.341366 kernel: mlx5_core f913:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 01:28:00.349595 kernel: mlx5_core f913:00:02.0 enP63763s1: renamed from eth1 Dec 13 01:28:00.434645 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 01:28:00.494458 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (483) Dec 13 01:28:00.506530 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (482) Dec 13 01:28:00.506723 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 01:28:00.531648 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 01:28:00.544807 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 01:28:00.562647 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:28:00.584670 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:28:00.608547 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:28:00.615508 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:28:01.623550 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:28:01.624428 disk-uuid[601]: The operation has completed successfully. Dec 13 01:28:01.676138 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:28:01.676233 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:28:01.723637 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:28:01.736105 sh[687]: Success Dec 13 01:28:01.762288 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:28:01.922019 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:28:01.927527 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:28:01.941628 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:28:01.968027 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:28:01.968075 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:28:01.975141 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:28:01.979888 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:28:01.983742 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:28:02.239005 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:28:02.244810 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:28:02.260752 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:28:02.272002 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 01:28:02.303845 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:28:02.303878 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:28:02.303888 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:28:02.320641 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:28:02.329387 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:28:02.343572 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:28:02.349359 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:28:02.362735 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:28:02.416884 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:28:02.437619 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:28:02.465045 systemd-networkd[871]: lo: Link UP Dec 13 01:28:02.465057 systemd-networkd[871]: lo: Gained carrier Dec 13 01:28:02.466599 systemd-networkd[871]: Enumeration completed Dec 13 01:28:02.468811 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:28:02.469029 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:02.469032 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:28:02.475219 systemd[1]: Reached target network.target - Network. Dec 13 01:28:02.562526 kernel: mlx5_core f913:00:02.0 enP63763s1: Link up Dec 13 01:28:02.599520 kernel: hv_netvsc 0022487d-d035-0022-487d-d0350022487d eth0: Data path switched to VF: enP63763s1 Dec 13 01:28:02.599762 systemd-networkd[871]: enP63763s1: Link UP Dec 13 01:28:02.599884 systemd-networkd[871]: eth0: Link UP Dec 13 01:28:02.600009 systemd-networkd[871]: eth0: Gained carrier Dec 13 01:28:02.600018 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:02.607779 systemd-networkd[871]: enP63763s1: Gained carrier Dec 13 01:28:02.638535 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:28:03.160340 ignition[804]: Ignition 2.19.0 Dec 13 01:28:03.160352 ignition[804]: Stage: fetch-offline Dec 13 01:28:03.164970 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:28:03.160388 ignition[804]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:03.160396 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:03.160482 ignition[804]: parsed url from cmdline: "" Dec 13 01:28:03.160485 ignition[804]: no config URL provided Dec 13 01:28:03.160507 ignition[804]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:28:03.190732 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:28:03.160519 ignition[804]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:28:03.160523 ignition[804]: failed to fetch config: resource requires networking Dec 13 01:28:03.160709 ignition[804]: Ignition finished successfully Dec 13 01:28:03.208883 ignition[881]: Ignition 2.19.0 Dec 13 01:28:03.208890 ignition[881]: Stage: fetch Dec 13 01:28:03.209062 ignition[881]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:03.209072 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:03.209154 ignition[881]: parsed url from cmdline: "" Dec 13 01:28:03.209157 ignition[881]: no config URL provided Dec 13 01:28:03.209161 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:28:03.209168 ignition[881]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:28:03.209189 ignition[881]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:28:03.308191 ignition[881]: GET result: OK Dec 13 01:28:03.308275 ignition[881]: config has been read from IMDS userdata Dec 13 01:28:03.308317 ignition[881]: parsing config with SHA512: 827539b453ce3bc3f14a306cd3d0a7d32f745668eff48ae522f87a3bdf9057c34915b9b53a1a9170f49eb85b74afa79df232832a9c61e86109c02cacd942db65 Dec 13 01:28:03.312125 unknown[881]: fetched base config from "system" Dec 13 01:28:03.312731 ignition[881]: fetch: fetch complete Dec 13 01:28:03.312132 unknown[881]: fetched base config from "system" Dec 13 01:28:03.312737 ignition[881]: fetch: fetch passed Dec 13 01:28:03.312137 unknown[881]: fetched user config from "azure" Dec 13 01:28:03.312782 ignition[881]: Ignition finished successfully Dec 13 01:28:03.320012 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:28:03.336738 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:28:03.358857 ignition[888]: Ignition 2.19.0 Dec 13 01:28:03.364374 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:28:03.358864 ignition[888]: Stage: kargs Dec 13 01:28:03.359066 ignition[888]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:03.359076 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:03.360011 ignition[888]: kargs: kargs passed Dec 13 01:28:03.360061 ignition[888]: Ignition finished successfully Dec 13 01:28:03.387733 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:28:03.406513 ignition[895]: Ignition 2.19.0 Dec 13 01:28:03.406525 ignition[895]: Stage: disks Dec 13 01:28:03.406715 ignition[895]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:03.413386 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:28:03.406735 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:03.422647 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:28:03.407634 ignition[895]: disks: disks passed Dec 13 01:28:03.432845 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:28:03.407684 ignition[895]: Ignition finished successfully Dec 13 01:28:03.444660 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:28:03.455186 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:28:03.463557 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:28:03.493967 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 01:28:03.569753 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 01:28:03.581814 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:28:03.597679 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:28:03.646554 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:28:03.646786 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:28:03.651562 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:28:03.690602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:28:03.697605 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:28:03.708671 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:28:03.721160 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:28:03.721193 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:28:03.753600 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (915) Dec 13 01:28:03.729361 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:28:03.774189 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:28:03.774210 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:28:03.778887 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:28:03.786219 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:28:03.785751 systemd-networkd[871]: enP63763s1: Gained IPv6LL Dec 13 01:28:03.786649 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:28:03.793675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:28:04.143437 coreos-metadata[917]: Dec 13 01:28:04.143 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:28:04.153166 coreos-metadata[917]: Dec 13 01:28:04.153 INFO Fetch successful Dec 13 01:28:04.158274 coreos-metadata[917]: Dec 13 01:28:04.158 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:28:04.183416 coreos-metadata[917]: Dec 13 01:28:04.183 INFO Fetch successful Dec 13 01:28:04.189118 coreos-metadata[917]: Dec 13 01:28:04.185 INFO wrote hostname ci-4081.2.1-a-ab3ee36414 to /sysroot/etc/hostname Dec 13 01:28:04.190850 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:28:04.297714 systemd-networkd[871]: eth0: Gained IPv6LL Dec 13 01:28:04.339419 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:28:04.348871 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:28:04.356412 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:28:04.383968 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:28:04.924309 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:28:04.942697 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:28:04.955329 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:28:04.976452 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 13 01:28:04.983175 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:28:04.997964 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:28:05.013863 ignition[1034]: INFO : Ignition 2.19.0 Dec 13 01:28:05.018893 ignition[1034]: INFO : Stage: mount Dec 13 01:28:05.018893 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:05.018893 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:05.018893 ignition[1034]: INFO : mount: mount passed Dec 13 01:28:05.018893 ignition[1034]: INFO : Ignition finished successfully Dec 13 01:28:05.022519 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:28:05.049588 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:28:05.071942 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:28:05.096556 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1044) Dec 13 01:28:05.096604 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:28:05.102730 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:28:05.106735 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:28:05.112717 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:28:05.114084 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:28:05.136464 ignition[1062]: INFO : Ignition 2.19.0 Dec 13 01:28:05.136464 ignition[1062]: INFO : Stage: files Dec 13 01:28:05.143863 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:05.143863 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:05.143863 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:28:05.163652 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:28:05.163652 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:28:05.213480 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:28:05.220392 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:28:05.220392 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:28:05.218906 unknown[1062]: wrote ssh authorized keys file for user: core Dec 13 01:28:05.238665 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:28:05.238665 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:28:05.281142 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:28:05.376405 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Dec 13 01:28:05.726423 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:28:06.028662 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:28:06.028662 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:28:06.542310 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:28:06.553145 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:28:06.553145 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:28:06.553145 ignition[1062]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:28:06.583225 ignition[1062]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:28:06.583225 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:28:06.583225 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:28:06.583225 ignition[1062]: INFO : files: files passed Dec 13 01:28:06.583225 ignition[1062]: INFO : Ignition finished 
successfully Dec 13 01:28:06.563848 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:28:06.609782 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:28:06.620643 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:28:06.640458 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:28:06.670124 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:28:06.670124 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:28:06.640559 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:28:06.698841 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:28:06.658845 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:28:06.665910 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:28:06.695519 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:28:06.727812 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:28:06.727912 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:28:06.738632 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:28:06.749540 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:28:06.760742 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:28:06.783760 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:28:06.795414 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:28:06.806820 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:28:06.827725 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:28:06.833989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:28:06.846709 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:28:06.857683 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:28:06.857803 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:28:06.872866 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:28:06.878209 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:28:06.889140 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:28:06.900467 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:28:06.910762 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:28:06.922333 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:28:06.933243 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:28:06.945467 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:28:06.955646 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:28:06.966889 systemd[1]: Stopped target swap.target - Swaps. 
Dec 13 01:28:06.975914 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:28:06.976078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:28:06.990271 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:28:07.002316 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:28:07.016883 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:28:07.016996 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:28:07.030200 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:28:07.030362 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:28:07.049041 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:28:07.049208 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:28:07.064845 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:28:07.064992 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:28:07.075449 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:28:07.075610 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:28:07.132679 ignition[1113]: INFO : Ignition 2.19.0 Dec 13 01:28:07.132679 ignition[1113]: INFO : Stage: umount Dec 13 01:28:07.132679 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:07.132679 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:07.132679 ignition[1113]: INFO : umount: umount passed Dec 13 01:28:07.132679 ignition[1113]: INFO : Ignition finished successfully Dec 13 01:28:07.107579 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:28:07.121997 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:28:07.137251 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:28:07.137411 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:28:07.155067 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:28:07.155178 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:28:07.169303 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:28:07.169398 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:28:07.178131 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:28:07.178202 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:28:07.191784 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:28:07.191847 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:28:07.202357 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:28:07.202401 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:28:07.213053 systemd[1]: Stopped target network.target - Network. Dec 13 01:28:07.228991 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:28:07.229056 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:28:07.240331 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:28:07.251740 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Dec 13 01:28:07.262525 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:28:07.273793 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:28:07.283793 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:28:07.294154 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:28:07.294201 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:28:07.304854 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:28:07.304892 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:28:07.315403 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:28:07.315460 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:28:07.325833 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:28:07.325874 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:28:07.335884 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:28:07.347308 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:28:07.360344 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:28:07.360940 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:28:07.361024 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:28:07.365520 systemd-networkd[871]: eth0: DHCPv6 lease lost Dec 13 01:28:07.378046 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:28:07.378148 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:28:07.394139 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:28:07.394272 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:28:07.610450 kernel: hv_netvsc 0022487d-d035-0022-487d-d0350022487d eth0: Data path switched from VF: enP63763s1 Dec 13 01:28:07.404432 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:28:07.404536 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:28:07.415239 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:28:07.415289 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:28:07.421970 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:28:07.422027 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:28:07.448887 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:28:07.464718 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:28:07.464793 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:28:07.477528 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:28:07.477580 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:07.489472 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:28:07.489527 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:28:07.501576 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:28:07.501619 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:28:07.514874 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 01:28:07.550830 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:28:07.550977 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:28:07.562779 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:28:07.562830 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:28:07.573534 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:28:07.573569 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:28:07.593534 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:28:07.593593 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:28:07.610552 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:28:07.610603 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:28:07.621356 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:28:07.621400 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:28:07.657732 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:28:07.664456 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:28:07.861402 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Dec 13 01:28:07.664532 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:28:07.671824 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:28:07.671923 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:28:07.685446 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:28:07.685522 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:28:07.698850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:28:07.698895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:07.712500 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:28:07.712596 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:28:07.734460 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:28:07.734647 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:28:07.745125 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:28:07.774727 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:28:07.789564 systemd[1]: Switching root. 
Dec 13 01:28:07.930188 systemd-journald[217]: Journal stopped
Dec 13 01:28:04.983175 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:28:04.997964 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:28:05.013863 ignition[1034]: INFO : Ignition 2.19.0 Dec 13 01:28:05.018893 ignition[1034]: INFO : Stage: mount Dec 13 01:28:05.018893 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:05.018893 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:05.018893 ignition[1034]: INFO : mount: mount passed Dec 13 01:28:05.018893 ignition[1034]: INFO : Ignition finished successfully Dec 13 01:28:05.022519 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:28:05.049588 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:28:05.071942 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:28:05.096556 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1044) Dec 13 01:28:05.096604 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:28:05.102730 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:28:05.106735 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:28:05.112717 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:28:05.114084 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:28:05.136464 ignition[1062]: INFO : Ignition 2.19.0 Dec 13 01:28:05.136464 ignition[1062]: INFO : Stage: files Dec 13 01:28:05.143863 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:05.143863 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:05.143863 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:28:05.163652 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:28:05.163652 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:28:05.213480 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:28:05.220392 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:28:05.220392 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:28:05.218906 unknown[1062]: wrote ssh authorized keys file for user: core Dec 13 01:28:05.238665 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:28:05.238665 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:28:05.281142 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:28:05.376405 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:28:05.388717 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Dec 13 01:28:05.726423 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:28:06.028662 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:28:06.028662 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:28:06.542310 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:28:06.553145 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:28:06.553145 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:28:06.553145 ignition[1062]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:28:06.583225 ignition[1062]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:28:06.583225 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:28:06.583225 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:28:06.583225 ignition[1062]: INFO : files: files passed Dec 13 01:28:06.583225 ignition[1062]: INFO : Ignition finished 
successfully Dec 13 01:28:06.563848 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:28:06.609782 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:28:06.620643 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:28:06.640458 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:28:06.670124 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:28:06.670124 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:28:06.640559 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:28:06.698841 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:28:06.658845 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:28:06.665910 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:28:06.695519 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:28:06.727812 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:28:06.727912 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:28:06.738632 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:28:06.749540 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:28:06.760742 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:28:06.783760 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:28:06.795414 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:28:06.806820 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:28:06.827725 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:28:06.833989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:28:06.846709 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:28:06.857683 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:28:06.857803 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:28:06.872866 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:28:06.878209 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:28:06.889140 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:28:06.900467 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:28:06.910762 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:28:06.922333 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:28:06.933243 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:28:06.945467 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:28:06.955646 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:28:06.966889 systemd[1]: Stopped target swap.target - Swaps. 
Dec 13 01:28:06.975914 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:28:06.976078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:28:06.990271 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:28:07.002316 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:28:07.016883 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:28:07.016996 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:28:07.030200 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:28:07.030362 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:28:07.049041 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:28:07.049208 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:28:07.064845 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:28:07.064992 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:28:07.075449 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:28:07.075610 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:28:07.132679 ignition[1113]: INFO : Ignition 2.19.0 Dec 13 01:28:07.132679 ignition[1113]: INFO : Stage: umount Dec 13 01:28:07.132679 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:07.132679 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:28:07.132679 ignition[1113]: INFO : umount: umount passed Dec 13 01:28:07.132679 ignition[1113]: INFO : Ignition finished successfully Dec 13 01:28:07.107579 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:28:07.121997 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:28:07.137251 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:28:07.137411 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:28:07.155067 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:28:07.155178 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:28:07.169303 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:28:07.169398 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:28:07.178131 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:28:07.178202 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:28:07.191784 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:28:07.191847 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:28:07.202357 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:28:07.202401 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:28:07.213053 systemd[1]: Stopped target network.target - Network. Dec 13 01:28:07.228991 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:28:07.229056 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:28:07.240331 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:28:07.251740 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Dec 13 01:28:07.262525 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:28:07.273793 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:28:07.283793 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:28:07.294154 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:28:07.294201 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:28:07.304854 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:28:07.304892 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:28:07.315403 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:28:07.315460 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:28:07.325833 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:28:07.325874 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:28:07.335884 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:28:07.347308 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:28:07.360344 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:28:07.360940 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:28:07.361024 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:28:07.365520 systemd-networkd[871]: eth0: DHCPv6 lease lost Dec 13 01:28:07.378046 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:28:07.378148 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:28:07.394139 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:28:07.394272 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:28:07.610450 kernel: hv_netvsc 0022487d-d035-0022-487d-d0350022487d eth0: Data path switched from VF: enP63763s1 Dec 13 01:28:07.404432 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:28:07.404536 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:28:07.415239 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:28:07.415289 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:28:07.421970 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:28:07.422027 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:28:07.448887 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:28:07.464718 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:28:07.464793 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:28:07.477528 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:28:07.477580 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:07.489472 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:28:07.489527 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:28:07.501576 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:28:07.501619 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:28:07.514874 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 01:28:07.550830 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:28:07.550977 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:28:07.562779 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:28:07.562830 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:28:07.573534 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:28:07.573569 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:28:07.593534 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:28:07.593593 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:28:07.610552 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:28:07.610603 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:28:07.621356 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:28:07.621400 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:28:07.657732 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:28:07.664456 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:28:07.861402 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Dec 13 01:28:07.664532 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:28:07.671824 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:28:07.671923 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:28:07.685446 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:28:07.685522 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:28:07.698850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:28:07.698895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:07.712500 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:28:07.712596 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:28:07.734460 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:28:07.734647 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:28:07.745125 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:28:07.774727 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:28:07.789564 systemd[1]: Switching root. 
Dec 13 01:28:07.930188 systemd-journald[217]: Journal stopped Dec 13 01:28:11.578056 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:28:11.578078 kernel: SELinux: policy capability open_perms=1 Dec 13 01:28:11.578088 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:28:11.578096 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:28:11.578105 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:28:11.578113 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:28:11.578122 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:28:11.578130 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:28:11.578137 kernel: audit: type=1403 audit(1734053288.692:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:28:11.578147 systemd[1]: Successfully loaded SELinux policy in 144.623ms. Dec 13 01:28:11.578158 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.954ms. Dec 13 01:28:11.578169 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:28:11.578177 systemd[1]: Detected virtualization microsoft. Dec 13 01:28:11.578188 systemd[1]: Detected architecture arm64. Dec 13 01:28:11.578197 systemd[1]: Detected first boot. Dec 13 01:28:11.578208 systemd[1]: Hostname set to ci-4081.2.1-a-ab3ee36414. Dec 13 01:28:11.578217 systemd[1]: Initializing machine ID from random generator. Dec 13 01:28:11.578226 zram_generator::config[1156]: No configuration found. Dec 13 01:28:11.578235 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:28:11.578244 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:28:11.578253 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:28:11.578262 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:28:11.578273 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:28:11.578282 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:28:11.578291 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:28:11.578301 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:28:11.578310 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:28:11.578319 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:28:11.578328 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:28:11.578339 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:28:11.578348 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:28:11.578357 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:28:11.578367 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:28:11.578376 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Dec 13 01:28:11.578387 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:28:11.578397 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:28:11.578406 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:28:11.578416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:28:11.578426 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:28:11.578435 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:28:11.578446 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:28:11.578456 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:28:11.578465 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:28:11.578474 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:28:11.578484 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:28:11.578506 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:28:11.578515 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:28:11.578525 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:28:11.578534 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:28:11.578543 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:28:11.578553 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:28:11.578565 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:28:11.578680 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:28:11.578697 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:28:11.578707 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:28:11.578719 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:28:11.578728 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:28:11.578738 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:28:11.578750 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:28:11.578760 systemd[1]: Reached target machines.target - Containers. Dec 13 01:28:11.578770 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:28:11.578779 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:28:11.578789 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:28:11.578798 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:28:11.578808 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:28:11.578817 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:28:11.578828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:28:11.578838 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 13 01:28:11.578847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:28:11.578857 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:28:11.578867 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:28:11.578876 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:28:11.578886 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:28:11.578895 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:28:11.578905 kernel: ACPI: bus type drm_connector registered Dec 13 01:28:11.578914 kernel: loop: module loaded Dec 13 01:28:11.578924 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:28:11.578933 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:28:11.578943 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:28:11.578952 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:28:11.578980 systemd-journald[1259]: Collecting audit messages is disabled. Dec 13 01:28:11.579004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:28:11.579014 systemd-journald[1259]: Journal started Dec 13 01:28:11.579034 systemd-journald[1259]: Runtime Journal (/run/log/journal/20f24bc064e14d168631617b67a48b99) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:28:10.615877 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:28:10.739260 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:28:10.739634 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:28:10.739921 systemd[1]: systemd-journald.service: Consumed 2.986s CPU time. Dec 13 01:28:11.597633 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:28:11.597684 systemd[1]: Stopped verity-setup.service. Dec 13 01:28:11.597698 kernel: fuse: init (API version 7.39) Dec 13 01:28:11.618538 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:28:11.619821 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:28:11.625813 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:28:11.631994 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:28:11.637012 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:28:11.642689 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:28:11.649327 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:28:11.654359 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:28:11.662054 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:28:11.669876 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:28:11.670011 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:28:11.676718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:28:11.677568 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:28:11.683730 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:28:11.683861 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 13 01:28:11.689944 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:28:11.691521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:28:11.699266 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:28:11.699392 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:28:11.705277 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:28:11.705393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:28:11.711263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:28:11.717197 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:28:11.724157 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:28:11.730796 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:28:11.746080 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:28:11.762624 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:28:11.770075 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:28:11.775877 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:28:11.775914 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:28:11.781992 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:28:11.789826 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:28:11.796897 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:28:11.802317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:28:11.803987 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:28:11.811668 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:28:11.817947 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:28:11.819845 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:28:11.827799 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:28:11.829978 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:11.840683 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:28:11.848766 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:28:11.856157 systemd-journald[1259]: Time spent on flushing to /var/log/journal/20f24bc064e14d168631617b67a48b99 is 23.788ms for 895 entries. Dec 13 01:28:11.856157 systemd-journald[1259]: System Journal (/var/log/journal/20f24bc064e14d168631617b67a48b99) is 8.0M, max 2.6G, 2.6G free. Dec 13 01:28:11.907697 systemd-journald[1259]: Received client request to flush runtime journal. 
Dec 13 01:28:11.907738 kernel: loop0: detected capacity change from 0 to 31320 Dec 13 01:28:11.869699 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:28:11.879664 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:28:11.889946 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:28:11.900116 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:28:11.912913 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:28:11.925929 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:28:11.939820 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:28:11.954805 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:28:11.961624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:11.969678 udevadm[1293]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:28:11.993683 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:28:11.994972 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:28:12.003356 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Dec 13 01:28:12.003385 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Dec 13 01:28:12.006880 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:28:12.021640 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:28:12.087261 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:28:12.102751 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:28:12.120579 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Dec 13 01:28:12.120596 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Dec 13 01:28:12.124197 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:28:12.235528 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:28:12.340566 kernel: loop1: detected capacity change from 0 to 189592 Dec 13 01:28:12.414513 kernel: loop2: detected capacity change from 0 to 114432 Dec 13 01:28:12.777889 kernel: loop3: detected capacity change from 0 to 114328 Dec 13 01:28:12.791841 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:28:12.802719 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:28:12.822081 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Dec 13 01:28:12.885472 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:28:12.902703 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:28:12.948733 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:28:12.970934 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Dec 13 01:28:12.999245 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1338) Dec 13 01:28:12.999312 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1338) Dec 13 01:28:13.035292 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:28:13.043517 kernel: loop4: detected capacity change from 0 to 31320 Dec 13 01:28:13.064525 kernel: loop5: detected capacity change from 0 to 189592 Dec 13 01:28:13.064610 kernel: hv_vmbus: registering driver hv_balloon Dec 13 01:28:13.081747 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:28:13.081823 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 01:28:13.081843 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 01:28:13.089766 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 01:28:13.096829 kernel: loop6: detected capacity change from 0 to 114432 Dec 13 01:28:13.111517 kernel: loop7: detected capacity change from 0 to 114328 Dec 13 01:28:13.134660 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 01:28:13.134736 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 01:28:13.142679 kernel: Console: switching to colour dummy device 80x25 Dec 13 01:28:13.144011 (sd-merge)[1355]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 13 01:28:13.151715 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:28:13.144499 (sd-merge)[1355]: Merged extensions into '/usr'. Dec 13 01:28:13.149777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:13.160056 systemd[1]: Reloading requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:28:13.160166 systemd[1]: Reloading... Dec 13 01:28:13.216942 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1320) Dec 13 01:28:13.280537 zram_generator::config[1442]: No configuration found. Dec 13 01:28:13.280437 systemd-networkd[1329]: lo: Link UP Dec 13 01:28:13.280441 systemd-networkd[1329]: lo: Gained carrier Dec 13 01:28:13.282924 systemd-networkd[1329]: Enumeration completed Dec 13 01:28:13.284687 systemd-networkd[1329]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:13.284701 systemd-networkd[1329]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:28:13.344513 kernel: mlx5_core f913:00:02.0 enP63763s1: Link up Dec 13 01:28:13.371512 kernel: hv_netvsc 0022487d-d035-0022-487d-d0350022487d eth0: Data path switched to VF: enP63763s1 Dec 13 01:28:13.371816 systemd-networkd[1329]: enP63763s1: Link UP Dec 13 01:28:13.371957 systemd-networkd[1329]: eth0: Link UP Dec 13 01:28:13.371960 systemd-networkd[1329]: eth0: Gained carrier Dec 13 01:28:13.371975 systemd-networkd[1329]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:28:13.374765 systemd-networkd[1329]: enP63763s1: Gained carrier Dec 13 01:28:13.381779 systemd-networkd[1329]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:28:13.405659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:13.475171 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:28:13.481858 systemd[1]: Reloading finished in 321 ms. Dec 13 01:28:13.511652 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:28:13.519550 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:28:13.526710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:13.557757 systemd[1]: Starting ensure-sysext.service... Dec 13 01:28:13.563743 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:28:13.576681 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:28:13.585459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:28:13.591756 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:28:13.592144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:13.599625 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:13.610794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:13.618106 systemd-tmpfiles[1495]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:28:13.618246 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:28:13.618837 systemd-tmpfiles[1495]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:28:13.619577 systemd-tmpfiles[1495]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:28:13.619904 systemd-tmpfiles[1495]: ACLs are not supported, ignoring. Dec 13 01:28:13.619951 systemd-tmpfiles[1495]: ACLs are not supported, ignoring. Dec 13 01:28:13.629436 systemd[1]: Reloading requested from client PID 1491 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:28:13.629455 systemd[1]: Reloading... Dec 13 01:28:13.640800 systemd-tmpfiles[1495]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:28:13.641944 systemd-tmpfiles[1495]: Skipping /boot Dec 13 01:28:13.652385 systemd-tmpfiles[1495]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:28:13.652400 systemd-tmpfiles[1495]: Skipping /boot Dec 13 01:28:13.709530 zram_generator::config[1533]: No configuration found. Dec 13 01:28:13.813992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:13.884808 systemd[1]: Reloading finished in 254 ms. Dec 13 01:28:13.902736 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Dec 13 01:28:13.914933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:28:13.924634 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:13.949733 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:28:13.987737 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:28:13.996569 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:28:14.005751 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:28:14.015751 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:28:14.022812 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:28:14.033095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:28:14.042592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:28:14.059354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:28:14.075253 lvm[1602]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:28:14.075819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:28:14.085943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:28:14.086988 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:28:14.095319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:28:14.095459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:28:14.102545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:28:14.104485 augenrules[1617]: No rules Dec 13 01:28:14.105341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:28:14.113648 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:28:14.120456 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:28:14.127506 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:28:14.127638 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:28:14.134736 systemd-resolved[1608]: Positive Trust Anchors: Dec 13 01:28:14.135081 systemd-resolved[1608]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:28:14.135200 systemd-resolved[1608]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:28:14.141762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:28:14.148166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 13 01:28:14.153689 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:28:14.162752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:28:14.166262 systemd-resolved[1608]: Using system hostname 'ci-4081.2.1-a-ab3ee36414'. Dec 13 01:28:14.168073 lvm[1628]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:28:14.177770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:28:14.194272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:28:14.200078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:28:14.200853 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:28:14.208608 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:28:14.215906 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:28:14.223425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:28:14.223595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:28:14.230409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:28:14.230757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:28:14.238191 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:28:14.238334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:28:14.254429 systemd[1]: Reached target network.target - Network. Dec 13 01:28:14.259536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:28:14.266520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:28:14.271623 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:28:14.278476 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:28:14.285293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:28:14.293719 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:28:14.299863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:28:14.299925 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:28:14.305988 systemd[1]: Finished ensure-sysext.service. Dec 13 01:28:14.310416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:28:14.310584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:28:14.317198 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:28:14.317341 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:28:14.323664 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:28:14.323794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:28:14.330977 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:28:14.331121 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 01:28:14.340370 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:28:14.340458 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:28:14.601730 systemd-networkd[1329]: enP63763s1: Gained IPv6LL Dec 13 01:28:14.721724 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:28:14.728771 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:28:15.242718 systemd-networkd[1329]: eth0: Gained IPv6LL Dec 13 01:28:15.244355 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:28:15.251690 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:28:15.978467 ldconfig[1285]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:28:15.998000 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:28:16.008667 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:28:16.022204 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:28:16.028474 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:28:16.034323 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:28:16.041253 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:28:16.048072 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:28:16.053683 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:28:16.060881 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:28:16.067185 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:28:16.067214 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:28:16.072291 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:28:16.089522 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:28:16.096365 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:28:16.108093 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:28:16.113725 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:28:16.120065 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:28:16.124903 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:28:16.129473 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:28:16.129510 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:28:16.140609 systemd[1]: Starting chronyd.service - NTP client/server... Dec 13 01:28:16.147625 systemd[1]: Starting containerd.service - containerd container runtime... 
Dec 13 01:28:16.163967 (chronyd)[1654]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Dec 13 01:28:16.172414 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:28:16.179685 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:28:16.189048 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:28:16.198370 jq[1660]: false Dec 13 01:28:16.201601 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:28:16.200884 chronyd[1663]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Dec 13 01:28:16.209111 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:28:16.209158 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Dec 13 01:28:16.210337 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 13 01:28:16.216368 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 13 01:28:16.218631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:16.220902 KVP[1664]: KVP starting; pid is:1664 Dec 13 01:28:16.227284 chronyd[1663]: Timezone right/UTC failed leap second check, ignoring Dec 13 01:28:16.227545 chronyd[1663]: Loaded seccomp filter (level 2) Dec 13 01:28:16.238106 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:28:16.246441 KVP[1664]: KVP LIC Version: 3.1 Dec 13 01:28:16.246529 kernel: hv_utils: KVP IC version 4.0 Dec 13 01:28:16.251728 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:28:16.259369 dbus-daemon[1657]: [system] SELinux support is enabled Dec 13 01:28:16.259870 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:28:16.270896 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:28:16.281068 extend-filesystems[1661]: Found loop4 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found loop5 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found loop6 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found loop7 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found sda Dec 13 01:28:16.291936 extend-filesystems[1661]: Found sda1 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found sda2 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found sda3 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found usr Dec 13 01:28:16.291936 extend-filesystems[1661]: Found sda4 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found sda6 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found sda7 Dec 13 01:28:16.291936 extend-filesystems[1661]: Found sda9 Dec 13 01:28:16.291936 extend-filesystems[1661]: Checking size of /dev/sda9 Dec 13 01:28:16.283678 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 13 01:28:16.426946 coreos-metadata[1656]: Dec 13 01:28:16.367 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:28:16.426946 coreos-metadata[1656]: Dec 13 01:28:16.373 INFO Fetch successful Dec 13 01:28:16.426946 coreos-metadata[1656]: Dec 13 01:28:16.373 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 01:28:16.426946 coreos-metadata[1656]: Dec 13 01:28:16.377 INFO Fetch successful Dec 13 01:28:16.426946 coreos-metadata[1656]: Dec 13 01:28:16.378 INFO Fetching http://168.63.129.16/machine/ff624604-cfed-4028-a02a-d26f7f915d23/8effca51%2Dd48d%2D4a9e%2D9a35%2Dcb2aadba2f47.%5Fci%2D4081.2.1%2Da%2Dab3ee36414?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 01:28:16.426946 coreos-metadata[1656]: Dec 13 01:28:16.383 INFO Fetch successful Dec 13 01:28:16.426946 coreos-metadata[1656]: Dec 13 01:28:16.383 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:28:16.426946 coreos-metadata[1656]: Dec 13 01:28:16.404 INFO Fetch successful Dec 13 01:28:16.430039 extend-filesystems[1661]: Old size kept for /dev/sda9 Dec 13 01:28:16.430039 extend-filesystems[1661]: Found sr0 Dec 13 01:28:16.304694 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:28:16.317409 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:28:16.460823 update_engine[1687]: I20241213 01:28:16.398397 1687 main.cc:92] Flatcar Update Engine starting Dec 13 01:28:16.460823 update_engine[1687]: I20241213 01:28:16.413604 1687 update_check_scheduler.cc:74] Next update check in 9m23s Dec 13 01:28:16.317926 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:28:16.330180 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:28:16.472014 jq[1692]: true Dec 13 01:28:16.343409 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:28:16.352876 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:28:16.363736 systemd[1]: Started chronyd.service - NTP client/server. Dec 13 01:28:16.379875 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:28:16.380040 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:28:16.385108 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:28:16.385255 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:28:16.434553 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:28:16.434704 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:28:16.443725 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:28:16.455724 systemd-logind[1680]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 13 01:28:16.456839 systemd-logind[1680]: New seat seat0. Dec 13 01:28:16.465693 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:28:16.473813 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:28:16.473972 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
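The coreos-metadata fetches above hit two Azure endpoints: the wireserver at 168.63.129.16 and the Instance Metadata Service (IMDS) at 169.254.169.254. As an illustration only (not part of the log), a minimal sketch of the vmSize query it logs could look like this; the URL and api-version are copied from the log line, and the "Metadata: true" header is the one IMDS requires.

```python
# Editor's sketch: roughly the vmSize request coreos-metadata logs above.
# URL and api-version are taken from the log; everything else is illustrative.
import urllib.request

IMDS_VMSIZE_URL = (
    "http://169.254.169.254/metadata/instance/compute/vmSize"
    "?api-version=2017-08-01&format=text"
)

def fetch_vm_size() -> str:
    # IMDS only answers requests carrying the Metadata header.
    req = urllib.request.Request(IMDS_VMSIZE_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(fetch_vm_size())  # prints the VM size string reported by IMDS
```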
Dec 13 01:28:16.509795 (ntainerd)[1711]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:28:16.511759 dbus-daemon[1657]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:28:16.510881 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:28:16.510927 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:28:16.526178 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:28:16.526207 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:28:16.535618 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:28:16.541332 jq[1710]: true Dec 13 01:28:16.544569 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:28:16.559960 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:28:16.560671 tar[1701]: linux-arm64/helm Dec 13 01:28:16.573741 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:28:16.672516 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1706) Dec 13 01:28:16.688167 bash[1748]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:28:16.676433 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:28:16.694546 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:28:16.885313 locksmithd[1733]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:28:16.889609 sshd_keygen[1691]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:28:16.923636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:28:16.940466 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:28:16.952673 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 13 01:28:16.970790 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:28:16.972524 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:28:16.990864 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:28:17.026676 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 13 01:28:17.036645 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:28:17.056845 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:28:17.073834 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:28:17.081931 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:28:17.107615 tar[1701]: linux-arm64/LICENSE Dec 13 01:28:17.107819 tar[1701]: linux-arm64/README.md Dec 13 01:28:17.122552 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 13 01:28:17.164405 containerd[1711]: time="2024-12-13T01:28:17.164289120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:28:17.189656 containerd[1711]: time="2024-12-13T01:28:17.189599960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.192088 containerd[1711]: time="2024-12-13T01:28:17.192046880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.192181 containerd[1711]: time="2024-12-13T01:28:17.192167840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:28:17.192235 containerd[1711]: time="2024-12-13T01:28:17.192223280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:28:17.192426 containerd[1711]: time="2024-12-13T01:28:17.192409040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:28:17.192525 containerd[1711]: time="2024-12-13T01:28:17.192510160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.192684 containerd[1711]: time="2024-12-13T01:28:17.192660720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.192748 containerd[1711]: time="2024-12-13T01:28:17.192736000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.192986 containerd[1711]: time="2024-12-13T01:28:17.192964920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.193069 containerd[1711]: time="2024-12-13T01:28:17.193055400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.193173 containerd[1711]: time="2024-12-13T01:28:17.193157280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.193226 containerd[1711]: time="2024-12-13T01:28:17.193214240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.193370 containerd[1711]: time="2024-12-13T01:28:17.193354880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.193705 containerd[1711]: time="2024-12-13T01:28:17.193686200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.193903 containerd[1711]: time="2024-12-13T01:28:17.193885480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.193961 containerd[1711]: time="2024-12-13T01:28:17.193950040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:28:17.194143 containerd[1711]: time="2024-12-13T01:28:17.194073200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:28:17.194143 containerd[1711]: time="2024-12-13T01:28:17.194115040Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:28:17.243785 containerd[1711]: time="2024-12-13T01:28:17.243691080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:28:17.243785 containerd[1711]: time="2024-12-13T01:28:17.243762600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:28:17.243785 containerd[1711]: time="2024-12-13T01:28:17.243782000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:28:17.243785 containerd[1711]: time="2024-12-13T01:28:17.243798520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.243825000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244015720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244308520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244450600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244467960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244481280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244517360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244533200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244547320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244561720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244576640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244589760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244602320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.246506 containerd[1711]: time="2024-12-13T01:28:17.244613720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244634440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244652320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244665840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244680080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244692920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244706080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244718960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244732760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244745160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244760560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244771840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244783000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244795600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244815000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:28:17.246752 containerd[1711]: time="2024-12-13T01:28:17.244837640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.244850360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.244861160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.244913600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.244932840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.244943920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.244957320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.244967400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.244978880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.244989120Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:28:17.246997 containerd[1711]: time="2024-12-13T01:28:17.245005480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.245298360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.245355440Z" level=info msg="Connect containerd service" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.245380120Z" level=info msg="using legacy CRI server" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.245386240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.245466080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.246038800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.246292240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.246302960Z" level=info msg="Start subscribing containerd event" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.246375440Z" level=info msg="Start recovering state" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.246453640Z" level=info msg="Start event monitor" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.246465520Z" level=info msg="Start snapshots syncer" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.246474960Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:28:17.247171 containerd[1711]: time="2024-12-13T01:28:17.246482640Z" level=info msg="Start streaming server" Dec 13 01:28:17.247487 containerd[1711]: time="2024-12-13T01:28:17.246333280Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:28:17.247839 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:28:17.255310 containerd[1711]: time="2024-12-13T01:28:17.254530560Z" level=info msg="containerd successfully booted in 0.090981s" Dec 13 01:28:17.393923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:17.401009 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:28:17.401828 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:17.408630 systemd[1]: Startup finished in 639ms (kernel) + 10.555s (initrd) + 8.859s (userspace) = 20.054s. 
Dec 13 01:28:17.614014 login[1801]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:17.615440 login[1802]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:17.623460 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:28:17.634594 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:28:17.640567 systemd-logind[1680]: New session 1 of user core. Dec 13 01:28:17.644550 systemd-logind[1680]: New session 2 of user core. Dec 13 01:28:17.650825 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:28:17.658763 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:28:17.662692 (systemd)[1825]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:28:17.777162 systemd[1825]: Queued start job for default target default.target. Dec 13 01:28:17.785375 systemd[1825]: Created slice app.slice - User Application Slice. Dec 13 01:28:17.785401 systemd[1825]: Reached target paths.target - Paths. Dec 13 01:28:17.785412 systemd[1825]: Reached target timers.target - Timers. Dec 13 01:28:17.787007 systemd[1825]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:28:17.804312 systemd[1825]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:28:17.804523 systemd[1825]: Reached target sockets.target - Sockets. Dec 13 01:28:17.804620 systemd[1825]: Reached target basic.target - Basic System. Dec 13 01:28:17.804728 systemd[1825]: Reached target default.target - Main User Target. Dec 13 01:28:17.804757 systemd[1825]: Startup finished in 135ms. Dec 13 01:28:17.805054 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:28:17.809669 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:28:17.810604 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:28:17.854662 kubelet[1814]: E1213 01:28:17.854560 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:17.857578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:17.857708 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
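The kubelet failure above (and its later retries) comes down to the absence of /var/lib/kubelet/config.yaml, which on a kubeadm-managed node is written during `kubeadm init`/`kubeadm join`. Purely as a hypothetical illustration, the general shape of that file and the existence check the kubelet is tripping over can be sketched as below; the field values are placeholders, not this node's eventual configuration.

```python
# Hypothetical sketch: the kubelet exits because /var/lib/kubelet/config.yaml
# does not exist yet; kubeadm normally writes it during node bootstrap.
# The document below only shows the file's general shape and is NOT the
# configuration this node eventually receives.
from pathlib import Path

KUBELET_CONFIG_PATH = Path("/var/lib/kubelet/config.yaml")

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# real clusters add authentication, cgroupDriver, clusterDNS, etc.
"""

def config_present() -> bool:
    """Reproduce the check the kubelet is failing on: does the file exist?"""
    return KUBELET_CONFIG_PATH.is_file()

if __name__ == "__main__":
    if not config_present():
        print(f"{KUBELET_CONFIG_PATH} missing - kubelet will keep restarting")
        print(MINIMAL_KUBELET_CONFIG)
```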
Dec 13 01:28:18.482805 waagent[1799]: 2024-12-13T01:28:18.482708Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Dec 13 01:28:18.488159 waagent[1799]: 2024-12-13T01:28:18.488073Z INFO Daemon Daemon OS: flatcar 4081.2.1 Dec 13 01:28:18.492225 waagent[1799]: 2024-12-13T01:28:18.492163Z INFO Daemon Daemon Python: 3.11.9 Dec 13 01:28:18.497623 waagent[1799]: 2024-12-13T01:28:18.497556Z INFO Daemon Daemon Run daemon Dec 13 01:28:18.502023 waagent[1799]: 2024-12-13T01:28:18.501964Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.1' Dec 13 01:28:18.510874 waagent[1799]: 2024-12-13T01:28:18.510811Z INFO Daemon Daemon Using waagent for provisioning Dec 13 01:28:18.516131 waagent[1799]: 2024-12-13T01:28:18.516085Z INFO Daemon Daemon Activate resource disk Dec 13 01:28:18.520318 waagent[1799]: 2024-12-13T01:28:18.520270Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 01:28:18.530915 waagent[1799]: 2024-12-13T01:28:18.530865Z INFO Daemon Daemon Found device: None Dec 13 01:28:18.535199 waagent[1799]: 2024-12-13T01:28:18.535155Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 01:28:18.542687 waagent[1799]: 2024-12-13T01:28:18.542644Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 01:28:18.554478 waagent[1799]: 2024-12-13T01:28:18.554429Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:28:18.559683 waagent[1799]: 2024-12-13T01:28:18.559642Z INFO Daemon Daemon Running default provisioning handler Dec 13 01:28:18.570668 waagent[1799]: 2024-12-13T01:28:18.570608Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 13 01:28:18.583246 waagent[1799]: 2024-12-13T01:28:18.583191Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:28:18.591842 waagent[1799]: 2024-12-13T01:28:18.591793Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:28:18.596461 waagent[1799]: 2024-12-13T01:28:18.596417Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 01:28:18.723522 waagent[1799]: 2024-12-13T01:28:18.721767Z INFO Daemon Daemon Successfully mounted dvd Dec 13 01:28:18.741795 waagent[1799]: 2024-12-13T01:28:18.741672Z INFO Daemon Daemon Detect protocol endpoint Dec 13 01:28:18.742130 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 01:28:18.746519 waagent[1799]: 2024-12-13T01:28:18.746438Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:28:18.752383 waagent[1799]: 2024-12-13T01:28:18.752332Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 01:28:18.759448 waagent[1799]: 2024-12-13T01:28:18.759397Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 01:28:18.764248 waagent[1799]: 2024-12-13T01:28:18.764199Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 01:28:18.769149 waagent[1799]: 2024-12-13T01:28:18.769103Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 01:28:18.806110 waagent[1799]: 2024-12-13T01:28:18.806059Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 01:28:18.812452 waagent[1799]: 2024-12-13T01:28:18.812420Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 01:28:18.817739 waagent[1799]: 2024-12-13T01:28:18.817693Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 01:28:18.994390 waagent[1799]: 2024-12-13T01:28:18.994234Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 01:28:19.000886 waagent[1799]: 2024-12-13T01:28:19.000815Z INFO Daemon Daemon Forcing an update of the goal state. Dec 13 01:28:19.010677 waagent[1799]: 2024-12-13T01:28:19.010623Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:28:19.030157 waagent[1799]: 2024-12-13T01:28:19.030106Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Dec 13 01:28:19.035377 waagent[1799]: 2024-12-13T01:28:19.035331Z INFO Daemon Dec 13 01:28:19.038032 waagent[1799]: 2024-12-13T01:28:19.037981Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e6f9ce1b-0271-4361-b114-5509b9cd3015 eTag: 4811117408178066216 source: Fabric] Dec 13 01:28:19.048197 waagent[1799]: 2024-12-13T01:28:19.048150Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 13 01:28:19.055918 waagent[1799]: 2024-12-13T01:28:19.055871Z INFO Daemon Dec 13 01:28:19.058543 waagent[1799]: 2024-12-13T01:28:19.058499Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:28:19.069654 waagent[1799]: 2024-12-13T01:28:19.069615Z INFO Daemon Daemon Downloading artifacts profile blob Dec 13 01:28:19.146001 waagent[1799]: 2024-12-13T01:28:19.145916Z INFO Daemon Downloaded certificate {'thumbprint': '7BE33418FEC5405DFB0DB3647DDC30D0558EE985', 'hasPrivateKey': True} Dec 13 01:28:19.155180 waagent[1799]: 2024-12-13T01:28:19.155128Z INFO Daemon Downloaded certificate {'thumbprint': '1D3E0D542965283302B541E905463ABE28D70D30', 'hasPrivateKey': False} Dec 13 01:28:19.165070 waagent[1799]: 2024-12-13T01:28:19.165020Z INFO Daemon Fetch goal state completed Dec 13 01:28:19.176265 waagent[1799]: 2024-12-13T01:28:19.176221Z INFO Daemon Daemon Starting provisioning Dec 13 01:28:19.181161 waagent[1799]: 2024-12-13T01:28:19.181114Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 01:28:19.185819 waagent[1799]: 2024-12-13T01:28:19.185777Z INFO Daemon Daemon Set hostname [ci-4081.2.1-a-ab3ee36414] Dec 13 01:28:19.213522 waagent[1799]: 2024-12-13T01:28:19.212895Z INFO Daemon Daemon Publish hostname [ci-4081.2.1-a-ab3ee36414] Dec 13 01:28:19.219089 waagent[1799]: 2024-12-13T01:28:19.219022Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 01:28:19.225185 waagent[1799]: 2024-12-13T01:28:19.225131Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 01:28:19.261638 systemd-networkd[1329]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:19.261646 systemd-networkd[1329]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:28:19.261689 systemd-networkd[1329]: eth0: DHCP lease lost Dec 13 01:28:19.263108 waagent[1799]: 2024-12-13T01:28:19.263029Z INFO Daemon Daemon Create user account if not exists Dec 13 01:28:19.268711 waagent[1799]: 2024-12-13T01:28:19.268645Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 01:28:19.269562 systemd-networkd[1329]: eth0: DHCPv6 lease lost Dec 13 01:28:19.274454 waagent[1799]: 2024-12-13T01:28:19.274388Z INFO Daemon Daemon Configure sudoer Dec 13 01:28:19.278941 waagent[1799]: 2024-12-13T01:28:19.278856Z INFO Daemon Daemon Configure sshd Dec 13 01:28:19.283104 waagent[1799]: 2024-12-13T01:28:19.283048Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 13 01:28:19.295776 waagent[1799]: 2024-12-13T01:28:19.295466Z INFO Daemon Daemon Deploy ssh public key. Dec 13 01:28:19.305565 systemd-networkd[1329]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:28:20.385313 waagent[1799]: 2024-12-13T01:28:20.385263Z INFO Daemon Daemon Provisioning complete Dec 13 01:28:20.404082 waagent[1799]: 2024-12-13T01:28:20.404033Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 01:28:20.409791 waagent[1799]: 2024-12-13T01:28:20.409739Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 01:28:20.418824 waagent[1799]: 2024-12-13T01:28:20.418773Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Dec 13 01:28:20.544253 waagent[1883]: 2024-12-13T01:28:20.544182Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Dec 13 01:28:20.545137 waagent[1883]: 2024-12-13T01:28:20.544658Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.1 Dec 13 01:28:20.545137 waagent[1883]: 2024-12-13T01:28:20.544729Z INFO ExtHandler ExtHandler Python: 3.11.9 Dec 13 01:28:20.605524 waagent[1883]: 2024-12-13T01:28:20.603481Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:28:20.605524 waagent[1883]: 2024-12-13T01:28:20.603723Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:28:20.605524 waagent[1883]: 2024-12-13T01:28:20.603782Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:28:20.612077 waagent[1883]: 2024-12-13T01:28:20.612015Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:28:20.619514 waagent[1883]: 2024-12-13T01:28:20.619462Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 01:28:20.620071 waagent[1883]: 2024-12-13T01:28:20.620032Z INFO ExtHandler Dec 13 01:28:20.620212 waagent[1883]: 2024-12-13T01:28:20.620180Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: cc316abe-5f18-45de-97d5-579a401d2b04 eTag: 4811117408178066216 source: Fabric] Dec 13 01:28:20.620634 waagent[1883]: 2024-12-13T01:28:20.620593Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 13 01:28:20.621278 waagent[1883]: 2024-12-13T01:28:20.621235Z INFO ExtHandler Dec 13 01:28:20.621411 waagent[1883]: 2024-12-13T01:28:20.621381Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:28:20.625352 waagent[1883]: 2024-12-13T01:28:20.625321Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 01:28:20.706544 waagent[1883]: 2024-12-13T01:28:20.705860Z INFO ExtHandler Downloaded certificate {'thumbprint': '7BE33418FEC5405DFB0DB3647DDC30D0558EE985', 'hasPrivateKey': True} Dec 13 01:28:20.706544 waagent[1883]: 2024-12-13T01:28:20.706291Z INFO ExtHandler Downloaded certificate {'thumbprint': '1D3E0D542965283302B541E905463ABE28D70D30', 'hasPrivateKey': False} Dec 13 01:28:20.706767 waagent[1883]: 2024-12-13T01:28:20.706718Z INFO ExtHandler Fetch goal state completed Dec 13 01:28:20.724005 waagent[1883]: 2024-12-13T01:28:20.723946Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1883 Dec 13 01:28:20.724158 waagent[1883]: 2024-12-13T01:28:20.724123Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 13 01:28:20.725742 waagent[1883]: 2024-12-13T01:28:20.725698Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.1', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:28:20.726112 waagent[1883]: 2024-12-13T01:28:20.726076Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:28:20.751714 waagent[1883]: 2024-12-13T01:28:20.751668Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:28:20.751907 waagent[1883]: 2024-12-13T01:28:20.751868Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:28:20.758029 waagent[1883]: 2024-12-13T01:28:20.757578Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 01:28:20.763583 systemd[1]: Reloading requested from client PID 1898 ('systemctl') (unit waagent.service)... Dec 13 01:28:20.763596 systemd[1]: Reloading... Dec 13 01:28:20.841570 zram_generator::config[1940]: No configuration found. Dec 13 01:28:20.926843 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:21.001370 systemd[1]: Reloading finished in 237 ms. Dec 13 01:28:21.021966 waagent[1883]: 2024-12-13T01:28:21.021612Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Dec 13 01:28:21.028829 systemd[1]: Reloading requested from client PID 1986 ('systemctl') (unit waagent.service)... Dec 13 01:28:21.028842 systemd[1]: Reloading... Dec 13 01:28:21.096570 zram_generator::config[2021]: No configuration found. Dec 13 01:28:21.195654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:21.270000 systemd[1]: Reloading finished in 240 ms. 
Dec 13 01:28:21.293088 waagent[1883]: 2024-12-13T01:28:21.292988Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 13 01:28:21.293196 waagent[1883]: 2024-12-13T01:28:21.293157Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 13 01:28:22.610711 waagent[1883]: 2024-12-13T01:28:22.609532Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 01:28:22.610711 waagent[1883]: 2024-12-13T01:28:22.610118Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Dec 13 01:28:22.611092 waagent[1883]: 2024-12-13T01:28:22.610913Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:28:22.611092 waagent[1883]: 2024-12-13T01:28:22.610994Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:28:22.611229 waagent[1883]: 2024-12-13T01:28:22.611180Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 01:28:22.611337 waagent[1883]: 2024-12-13T01:28:22.611283Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:28:22.611473 waagent[1883]: 2024-12-13T01:28:22.611425Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:28:22.611473 waagent[1883]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:28:22.611473 waagent[1883]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:28:22.611473 waagent[1883]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:28:22.611473 waagent[1883]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:28:22.611473 waagent[1883]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:28:22.611473 waagent[1883]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:28:22.612086 waagent[1883]: 2024-12-13T01:28:22.612035Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:28:22.612234 waagent[1883]: 2024-12-13T01:28:22.612189Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:28:22.612608 waagent[1883]: 2024-12-13T01:28:22.612547Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:28:22.612737 waagent[1883]: 2024-12-13T01:28:22.612692Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 01:28:22.612855 waagent[1883]: 2024-12-13T01:28:22.612824Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:28:22.613263 waagent[1883]: 2024-12-13T01:28:22.613205Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:28:22.613350 waagent[1883]: 2024-12-13T01:28:22.613313Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:28:22.613477 waagent[1883]: 2024-12-13T01:28:22.613435Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 01:28:22.613709 waagent[1883]: 2024-12-13T01:28:22.613662Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:28:22.614371 waagent[1883]: 2024-12-13T01:28:22.614326Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:28:22.615506 waagent[1883]: 2024-12-13T01:28:22.615445Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:28:22.625878 waagent[1883]: 2024-12-13T01:28:22.625832Z INFO ExtHandler ExtHandler Dec 13 01:28:22.626070 waagent[1883]: 2024-12-13T01:28:22.626033Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4e4478c5-0706-4682-a9bd-1dfbd7ff934f correlation 594dac99-d768-4c8f-a58d-aa59ac1b0efb created: 2024-12-13T01:27:19.263824Z] Dec 13 01:28:22.626540 waagent[1883]: 2024-12-13T01:28:22.626475Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 01:28:22.627180 waagent[1883]: 2024-12-13T01:28:22.627143Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 13 01:28:22.657709 waagent[1883]: 2024-12-13T01:28:22.657643Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:28:22.657709 waagent[1883]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:28:22.657709 waagent[1883]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:28:22.657709 waagent[1883]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:d0:35 brd ff:ff:ff:ff:ff:ff Dec 13 01:28:22.657709 waagent[1883]: 3: enP63763s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:d0:35 brd ff:ff:ff:ff:ff:ff\ altname enP63763p0s2 Dec 13 01:28:22.657709 waagent[1883]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:28:22.657709 waagent[1883]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:28:22.657709 waagent[1883]: 2: eth0 inet 10.200.20.18/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:28:22.657709 waagent[1883]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:28:22.657709 waagent[1883]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 13 01:28:22.657709 waagent[1883]: 2: eth0 inet6 fe80::222:48ff:fe7d:d035/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:28:22.657709 waagent[1883]: 3: enP63763s1 inet6 fe80::222:48ff:fe7d:d035/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:28:22.663123 waagent[1883]: 2024-12-13T01:28:22.663061Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: ACF3BE9E-BDAD-4A16-9358-3174BE96BF99;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Dec 13 01:28:22.697531 waagent[1883]: 2024-12-13T01:28:22.696729Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Dec 13 01:28:22.697531 waagent[1883]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:22.697531 waagent[1883]: pkts bytes target prot opt in out source destination Dec 13 01:28:22.697531 waagent[1883]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:22.697531 waagent[1883]: pkts bytes target prot opt in out source destination Dec 13 01:28:22.697531 waagent[1883]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:22.697531 waagent[1883]: pkts bytes target prot opt in out source destination Dec 13 01:28:22.697531 waagent[1883]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:28:22.697531 waagent[1883]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:28:22.697531 waagent[1883]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:28:22.699513 waagent[1883]: 2024-12-13T01:28:22.699448Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 01:28:22.699513 waagent[1883]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:22.699513 waagent[1883]: pkts bytes target prot opt in out source destination Dec 13 01:28:22.699513 waagent[1883]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:22.699513 waagent[1883]: pkts bytes target prot opt in out source destination Dec 13 01:28:22.699513 waagent[1883]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:22.699513 waagent[1883]: pkts bytes target prot opt in out source destination Dec 13 01:28:22.699513 waagent[1883]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:28:22.699513 waagent[1883]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:28:22.699513 waagent[1883]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:28:22.699972 waagent[1883]: 2024-12-13T01:28:22.699940Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 01:28:27.888892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:28:27.895669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:27.996661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:27.999402 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:28.037904 kubelet[2114]: E1213 01:28:28.037820 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:28.039987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:28.040115 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:28:38.139085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:28:38.145659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:38.434212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
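The wireserver firewall rules listed above amount to three OUTPUT-chain entries: allow DNS to 168.63.129.16, allow root-owned (UID 0) traffic so the agent itself can reach the endpoint, and drop new connections to it from everything else. Expressed as explicit iptables invocations they would look roughly like the sketch below; this only illustrates what the listed counters mean, it is not waagent's actual code, and waagent manages these rules itself.

```python
# Editor's sketch: approximately the OUTPUT-chain rules waagent reports above,
# expressed as iptables commands (requires root and the iptables binary).
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # allow DNS queries to the wireserver from any process
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    # allow root-owned (UID 0) traffic, i.e. the agent itself
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # drop new or invalid connections to the wireserver from everything else
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

def apply_rules() -> None:
    for rule in RULES:
        subprocess.run(["iptables", "-A", "OUTPUT", *rule], check=True)

if __name__ == "__main__":
    apply_rules()
```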
Dec 13 01:28:38.437926 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:38.471715 kubelet[2129]: E1213 01:28:38.471621 2129 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:38.473996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:38.474137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:28:40.016517 chronyd[1663]: Selected source PHC0 Dec 13 01:28:48.638869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:28:48.649667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:48.891473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:48.895098 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:48.928210 kubelet[2145]: E1213 01:28:48.928156 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:48.930567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:48.930708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:28:55.951441 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:28:55.952576 systemd[1]: Started sshd@0-10.200.20.18:22-10.200.16.10:59252.service - OpenSSH per-connection server daemon (10.200.16.10:59252). Dec 13 01:28:56.472608 sshd[2153]: Accepted publickey for core from 10.200.16.10 port 59252 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:56.473875 sshd[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:56.478428 systemd-logind[1680]: New session 3 of user core. Dec 13 01:28:56.484643 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:28:56.859731 systemd[1]: Started sshd@1-10.200.20.18:22-10.200.16.10:59266.service - OpenSSH per-connection server daemon (10.200.16.10:59266). Dec 13 01:28:57.275174 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 59266 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:57.276431 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:57.280957 systemd-logind[1680]: New session 4 of user core. Dec 13 01:28:57.286700 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:28:57.578306 sshd[2158]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:57.581405 systemd[1]: sshd@1-10.200.20.18:22-10.200.16.10:59266.service: Deactivated successfully. Dec 13 01:28:57.582954 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:28:57.584601 systemd-logind[1680]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:28:57.585394 systemd-logind[1680]: Removed session 4. 
Dec 13 01:28:57.656182 systemd[1]: Started sshd@2-10.200.20.18:22-10.200.16.10:59272.service - OpenSSH per-connection server daemon (10.200.16.10:59272). Dec 13 01:28:58.086880 sshd[2165]: Accepted publickey for core from 10.200.16.10 port 59272 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:58.088158 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:58.091640 systemd-logind[1680]: New session 5 of user core. Dec 13 01:28:58.101701 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:28:58.399700 sshd[2165]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:58.402604 systemd[1]: sshd@2-10.200.20.18:22-10.200.16.10:59272.service: Deactivated successfully. Dec 13 01:28:58.404003 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:28:58.405955 systemd-logind[1680]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:28:58.406862 systemd-logind[1680]: Removed session 5. Dec 13 01:28:58.474635 systemd[1]: Started sshd@3-10.200.20.18:22-10.200.16.10:59288.service - OpenSSH per-connection server daemon (10.200.16.10:59288). Dec 13 01:28:58.901055 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 59288 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:58.902331 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:58.906764 systemd-logind[1680]: New session 6 of user core. Dec 13 01:28:58.912646 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:28:59.138745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:28:59.149749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:59.218680 sshd[2172]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:59.221880 systemd[1]: sshd@3-10.200.20.18:22-10.200.16.10:59288.service: Deactivated successfully. Dec 13 01:28:59.223231 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:28:59.224905 systemd-logind[1680]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:28:59.227564 systemd-logind[1680]: Removed session 6. Dec 13 01:28:59.294814 systemd[1]: Started sshd@4-10.200.20.18:22-10.200.16.10:42626.service - OpenSSH per-connection server daemon (10.200.16.10:42626). Dec 13 01:28:59.299657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:59.304159 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:59.341598 kubelet[2188]: E1213 01:28:59.341544 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:59.344030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:59.344178 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
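Every kubelet.service failure above (restart counters 1 through 4 so far) is the same condition: /var/lib/kubelet/config.yaml does not exist yet, so kubelet exits with status 1 and systemd schedules another restart. The file is normally written during node provisioning, for example by `kubeadm init`/`kubeadm join`; that is an assumption about how this node will be set up, not something visible in the log yet. A small Python sketch of that pre-flight check, mirroring the error text in the journal:

```python
# Minimal reproduction of the check kubelet keeps failing on above.
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def main() -> int:
    if not KUBELET_CONFIG.is_file():
        print(f"failed to load kubelet config file, path: {KUBELET_CONFIG}, "
              "error: no such file or directory", file=sys.stderr)
        return 1  # systemd then records status=1/FAILURE and schedules a restart
    return 0

if __name__ == "__main__":
    sys.exit(main())
```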
Dec 13 01:28:59.708638 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 42626 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:59.709904 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:59.713974 systemd-logind[1680]: New session 7 of user core. Dec 13 01:28:59.725685 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:29:00.020869 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:29:00.021121 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:00.047260 sudo[2197]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:00.113125 sshd[2186]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:00.116447 systemd[1]: sshd@4-10.200.20.18:22-10.200.16.10:42626.service: Deactivated successfully. Dec 13 01:29:00.117871 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:29:00.118473 systemd-logind[1680]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:29:00.119375 systemd-logind[1680]: Removed session 7. Dec 13 01:29:00.188830 systemd[1]: Started sshd@5-10.200.20.18:22-10.200.16.10:42640.service - OpenSSH per-connection server daemon (10.200.16.10:42640). Dec 13 01:29:00.615672 sshd[2202]: Accepted publickey for core from 10.200.16.10 port 42640 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:00.616977 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:00.621536 systemd-logind[1680]: New session 8 of user core. Dec 13 01:29:00.628692 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:29:00.860579 sudo[2206]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:29:00.861131 sudo[2206]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:00.863909 sudo[2206]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:00.868229 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:29:00.868836 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:00.881985 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:00.882915 auditctl[2209]: No rules Dec 13 01:29:00.883363 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:29:00.883538 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:00.886973 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:00.907487 augenrules[2227]: No rules Dec 13 01:29:00.908932 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:00.910709 sudo[2205]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:00.979085 sshd[2202]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:00.981445 systemd[1]: sshd@5-10.200.20.18:22-10.200.16.10:42640.service: Deactivated successfully. Dec 13 01:29:00.982981 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:29:00.984313 systemd-logind[1680]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:29:00.985471 systemd-logind[1680]: Removed session 8. 
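The sudo entries above show the default audit rule files being removed and audit-rules.service restarted, after which auditctl and augenrules both report "No rules". The sketch below approximates that sequence; the actual contents of /home/core/install.sh are not visible in the log, so this is only an illustration, and it would need root.

```python
# Approximation of the audit-rules reset visible in the journal above.
import subprocess
from pathlib import Path

RULE_FILES = [
    Path("/etc/audit/rules.d/80-selinux.rules"),
    Path("/etc/audit/rules.d/99-default.rules"),
]

def clear_audit_rules() -> None:
    for rule_file in RULE_FILES:
        rule_file.unlink(missing_ok=True)  # the log removes these via sudo rm -rf
    # Restarting the unit reruns augenrules, which then reports "No rules",
    # matching the auditctl/augenrules lines in the journal.
    subprocess.run(["systemctl", "restart", "audit-rules.service"], check=True)

if __name__ == "__main__":
    clear_audit_rules()
```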
Dec 13 01:29:01.056378 systemd[1]: Started sshd@6-10.200.20.18:22-10.200.16.10:42646.service - OpenSSH per-connection server daemon (10.200.16.10:42646). Dec 13 01:29:01.242334 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 01:29:01.355127 update_engine[1687]: I20241213 01:29:01.355074 1687 update_attempter.cc:509] Updating boot flags... Dec 13 01:29:01.421545 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (2249) Dec 13 01:29:01.470782 sshd[2235]: Accepted publickey for core from 10.200.16.10 port 42646 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:01.471851 sshd[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:01.478906 systemd-logind[1680]: New session 9 of user core. Dec 13 01:29:01.480761 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:29:01.509533 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (2254) Dec 13 01:29:01.706679 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:29:01.706939 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:02.761735 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:29:02.761854 (dockerd)[2319]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:29:03.361195 dockerd[2319]: time="2024-12-13T01:29:03.360915980Z" level=info msg="Starting up" Dec 13 01:29:03.884471 dockerd[2319]: time="2024-12-13T01:29:03.884392545Z" level=info msg="Loading containers: start." Dec 13 01:29:04.015517 kernel: Initializing XFRM netlink socket Dec 13 01:29:04.122303 systemd-networkd[1329]: docker0: Link UP Dec 13 01:29:04.148720 dockerd[2319]: time="2024-12-13T01:29:04.148675011Z" level=info msg="Loading containers: done." Dec 13 01:29:04.159175 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1908187800-merged.mount: Deactivated successfully. Dec 13 01:29:04.176993 dockerd[2319]: time="2024-12-13T01:29:04.176947705Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:29:04.177091 dockerd[2319]: time="2024-12-13T01:29:04.177063545Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:29:04.177214 dockerd[2319]: time="2024-12-13T01:29:04.177182505Z" level=info msg="Daemon has completed initialization" Dec 13 01:29:04.236028 dockerd[2319]: time="2024-12-13T01:29:04.235586898Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:29:04.235753 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:29:04.993238 containerd[1711]: time="2024-12-13T01:29:04.993180713Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:29:05.791818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558215966.mount: Deactivated successfully. 
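Once dockerd logs "Daemon has completed initialization" and "API listen on /run/docker.sock", the daemon is usable; the warning above also indicates it selected the overlay2 storage driver. A hedged post-start check using the docker CLI (assumed to be installed alongside the daemon):

```python
# Ask the running dockerd for its storage driver (expected: overlay2).
import subprocess

def docker_storage_driver() -> str:
    out = subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.strip()

if __name__ == "__main__":
    print("storage driver:", docker_storage_driver())  # expected: overlay2
```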
Dec 13 01:29:07.359537 containerd[1711]: time="2024-12-13T01:29:07.358979017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:07.361264 containerd[1711]: time="2024-12-13T01:29:07.361194541Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615585" Dec 13 01:29:07.363880 containerd[1711]: time="2024-12-13T01:29:07.363837906Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:07.368755 containerd[1711]: time="2024-12-13T01:29:07.368694955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:07.370478 containerd[1711]: time="2024-12-13T01:29:07.369812918Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.376592244s" Dec 13 01:29:07.370478 containerd[1711]: time="2024-12-13T01:29:07.369847918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Dec 13 01:29:07.370478 containerd[1711]: time="2024-12-13T01:29:07.370336999Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 01:29:09.371269 containerd[1711]: time="2024-12-13T01:29:09.371219522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:09.374191 containerd[1711]: time="2024-12-13T01:29:09.374163647Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470096" Dec 13 01:29:09.378809 containerd[1711]: time="2024-12-13T01:29:09.378773736Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:09.385016 containerd[1711]: time="2024-12-13T01:29:09.384971788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:09.386088 containerd[1711]: time="2024-12-13T01:29:09.385958310Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 2.015594351s" Dec 13 01:29:09.386088 containerd[1711]: time="2024-12-13T01:29:09.385998070Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Dec 13 01:29:09.386729 
containerd[1711]: time="2024-12-13T01:29:09.386486511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 01:29:09.390025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:29:09.396737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:09.479600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:09.483514 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:09.523249 kubelet[2518]: E1213 01:29:09.523170 2518 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:09.525157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:09.525276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:11.458207 containerd[1711]: time="2024-12-13T01:29:11.458154410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:11.460506 containerd[1711]: time="2024-12-13T01:29:11.460462054Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024202" Dec 13 01:29:11.464194 containerd[1711]: time="2024-12-13T01:29:11.464146501Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:11.469214 containerd[1711]: time="2024-12-13T01:29:11.469170671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:11.470368 containerd[1711]: time="2024-12-13T01:29:11.470207473Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 2.083675962s" Dec 13 01:29:11.470368 containerd[1711]: time="2024-12-13T01:29:11.470241753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Dec 13 01:29:11.470865 containerd[1711]: time="2024-12-13T01:29:11.470713394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:29:12.525807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481153990.mount: Deactivated successfully. 
Dec 13 01:29:13.150218 containerd[1711]: time="2024-12-13T01:29:13.150174236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:13.152232 containerd[1711]: time="2024-12-13T01:29:13.152205880Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771426" Dec 13 01:29:13.155538 containerd[1711]: time="2024-12-13T01:29:13.155516047Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:13.159676 containerd[1711]: time="2024-12-13T01:29:13.159624176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:13.160313 containerd[1711]: time="2024-12-13T01:29:13.160276538Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.689532984s" Dec 13 01:29:13.160363 containerd[1711]: time="2024-12-13T01:29:13.160315378Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 01:29:13.161077 containerd[1711]: time="2024-12-13T01:29:13.160889179Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:29:13.803403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593874219.mount: Deactivated successfully. 
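The PullImage/Pulled entries come from containerd's CRI plugin fetching the v1.31.4 control-plane images and kube-proxy (with coredns, pause and etcd following). The same pulls could be reproduced from the CRI side roughly as sketched below; crictl and the default containerd socket path are assumptions, neither appears in the journal.

```python
# Sketch of redoing the image pulls manually via crictl. Image tags are the
# ones from the log; the socket path is containerd's default.
import subprocess

CRI_ENDPOINT = "unix:///run/containerd/containerd.sock"
IMAGES = [
    "registry.k8s.io/kube-apiserver:v1.31.4",
    "registry.k8s.io/kube-controller-manager:v1.31.4",
    "registry.k8s.io/kube-scheduler:v1.31.4",
    "registry.k8s.io/kube-proxy:v1.31.4",
]

def pull_all() -> None:
    for image in IMAGES:
        subprocess.run(
            ["crictl", "--runtime-endpoint", CRI_ENDPOINT, "pull", image],
            check=True,
        )

if __name__ == "__main__":
    pull_all()
```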
Dec 13 01:29:14.712532 containerd[1711]: time="2024-12-13T01:29:14.712179821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:14.716066 containerd[1711]: time="2024-12-13T01:29:14.716031149Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:29:14.722022 containerd[1711]: time="2024-12-13T01:29:14.721971722Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:14.728208 containerd[1711]: time="2024-12-13T01:29:14.728158376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:14.729451 containerd[1711]: time="2024-12-13T01:29:14.729312099Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.56838856s" Dec 13 01:29:14.729451 containerd[1711]: time="2024-12-13T01:29:14.729351219Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:29:14.730146 containerd[1711]: time="2024-12-13T01:29:14.729969900Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:29:15.930215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89331945.mount: Deactivated successfully. 
Dec 13 01:29:15.953344 containerd[1711]: time="2024-12-13T01:29:15.953296143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:15.956502 containerd[1711]: time="2024-12-13T01:29:15.956329189Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Dec 13 01:29:15.959789 containerd[1711]: time="2024-12-13T01:29:15.959746477Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:15.966879 containerd[1711]: time="2024-12-13T01:29:15.966828452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:15.967698 containerd[1711]: time="2024-12-13T01:29:15.967575134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.237575154s" Dec 13 01:29:15.967698 containerd[1711]: time="2024-12-13T01:29:15.967605854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 01:29:15.968350 containerd[1711]: time="2024-12-13T01:29:15.968189215Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:29:16.662951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723806591.mount: Deactivated successfully. Dec 13 01:29:19.638969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:29:19.648417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:19.744707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:19.756795 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:19.788239 kubelet[2647]: E1213 01:29:19.788166 2647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:19.790955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:19.791094 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:29:20.487986 containerd[1711]: time="2024-12-13T01:29:20.487925147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:20.491021 containerd[1711]: time="2024-12-13T01:29:20.490765234Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Dec 13 01:29:20.494851 containerd[1711]: time="2024-12-13T01:29:20.494113082Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:20.500138 containerd[1711]: time="2024-12-13T01:29:20.500100816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:20.502194 containerd[1711]: time="2024-12-13T01:29:20.501107538Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.532887723s" Dec 13 01:29:20.502194 containerd[1711]: time="2024-12-13T01:29:20.501162138Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Dec 13 01:29:26.854351 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:26.865812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:26.885467 systemd[1]: Reloading requested from client PID 2681 ('systemctl') (unit session-9.scope)... Dec 13 01:29:26.885487 systemd[1]: Reloading... Dec 13 01:29:26.992561 zram_generator::config[2727]: No configuration found. Dec 13 01:29:27.085351 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:27.158535 systemd[1]: Reloading finished in 272 ms. Dec 13 01:29:27.204394 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:29:27.205576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:27.211742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:27.398080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:27.410770 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:27.441292 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:27.441292 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:29:27.441292 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:27.441697 kubelet[2789]: I1213 01:29:27.441649 2789 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:29:28.054778 kubelet[2789]: I1213 01:29:28.054263 2789 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:29:28.054778 kubelet[2789]: I1213 01:29:28.054686 2789 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:29:28.055428 kubelet[2789]: I1213 01:29:28.055402 2789 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:29:28.073553 kubelet[2789]: E1213 01:29:28.073484 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:28.074728 kubelet[2789]: I1213 01:29:28.074542 2789 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:28.079863 kubelet[2789]: E1213 01:29:28.079829 2789 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:29:28.080021 kubelet[2789]: I1213 01:29:28.080008 2789 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:29:28.083741 kubelet[2789]: I1213 01:29:28.083717 2789 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:29:28.083847 kubelet[2789]: I1213 01:29:28.083827 2789 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:29:28.083983 kubelet[2789]: I1213 01:29:28.083958 2789 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:29:28.084140 kubelet[2789]: I1213 01:29:28.083982 2789 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-a-ab3ee36414","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:29:28.084228 kubelet[2789]: I1213 01:29:28.084148 2789 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:29:28.084228 kubelet[2789]: I1213 01:29:28.084157 2789 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:29:28.084270 kubelet[2789]: I1213 01:29:28.084266 2789 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:28.085765 kubelet[2789]: I1213 01:29:28.085738 2789 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:29:28.085823 kubelet[2789]: I1213 01:29:28.085771 2789 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:29:28.085823 kubelet[2789]: I1213 01:29:28.085810 2789 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:29:28.085823 kubelet[2789]: I1213 01:29:28.085821 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:29:28.089922 kubelet[2789]: W1213 01:29:28.089773 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-ab3ee36414&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Dec 13 01:29:28.089922 kubelet[2789]: E1213 01:29:28.089825 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-ab3ee36414&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:28.089922 kubelet[2789]: W1213 01:29:28.090122 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Dec 13 01:29:28.089922 kubelet[2789]: E1213 01:29:28.090155 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:28.089922 kubelet[2789]: I1213 01:29:28.090225 2789 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:29:28.091901 kubelet[2789]: I1213 01:29:28.091834 2789 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:29:28.092348 kubelet[2789]: W1213 01:29:28.092336 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:29:28.094858 kubelet[2789]: I1213 01:29:28.094833 2789 server.go:1269] "Started kubelet" Dec 13 01:29:28.095529 kubelet[2789]: I1213 01:29:28.095478 2789 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:29:28.096971 kubelet[2789]: I1213 01:29:28.096953 2789 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:29:28.097754 kubelet[2789]: I1213 01:29:28.097684 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:29:28.098012 kubelet[2789]: I1213 01:29:28.097986 2789 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:29:28.099175 kubelet[2789]: E1213 01:29:28.098146 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-ab3ee36414.18109859d2dc63b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-ab3ee36414,UID:ci-4081.2.1-a-ab3ee36414,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-ab3ee36414,},FirstTimestamp:2024-12-13 01:29:28.094811061 +0000 UTC m=+0.681257795,LastTimestamp:2024-12-13 01:29:28.094811061 +0000 UTC m=+0.681257795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-ab3ee36414,}" Dec 13 01:29:28.102254 kubelet[2789]: I1213 01:29:28.102224 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:29:28.102405 kubelet[2789]: E1213 01:29:28.102390 2789 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:29:28.102771 kubelet[2789]: I1213 01:29:28.102756 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:29:28.103509 kubelet[2789]: I1213 01:29:28.103479 2789 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:29:28.103728 kubelet[2789]: E1213 01:29:28.103701 2789 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-ab3ee36414\" not found" Dec 13 01:29:28.104175 kubelet[2789]: I1213 01:29:28.104150 2789 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:29:28.104229 kubelet[2789]: I1213 01:29:28.104213 2789 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:29:28.104694 kubelet[2789]: W1213 01:29:28.104645 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Dec 13 01:29:28.104827 kubelet[2789]: E1213 01:29:28.104794 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:28.104953 kubelet[2789]: E1213 01:29:28.104932 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-ab3ee36414?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="200ms" Dec 13 01:29:28.105568 kubelet[2789]: I1213 01:29:28.105535 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:29:28.107688 kubelet[2789]: I1213 01:29:28.107658 2789 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:29:28.107688 kubelet[2789]: I1213 01:29:28.107679 2789 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:29:28.134438 kubelet[2789]: I1213 01:29:28.134383 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:29:28.136613 kubelet[2789]: I1213 01:29:28.136581 2789 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:29:28.136613 kubelet[2789]: I1213 01:29:28.136614 2789 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:29:28.136833 kubelet[2789]: I1213 01:29:28.136631 2789 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:29:28.136833 kubelet[2789]: E1213 01:29:28.136672 2789 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:29:28.137574 kubelet[2789]: W1213 01:29:28.137518 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Dec 13 01:29:28.137656 kubelet[2789]: E1213 01:29:28.137585 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:28.204835 kubelet[2789]: E1213 01:29:28.204780 2789 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-ab3ee36414\" not found" Dec 13 01:29:28.237115 kubelet[2789]: E1213 01:29:28.237090 2789 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:29:28.239794 kubelet[2789]: I1213 01:29:28.239766 2789 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:29:28.239794 kubelet[2789]: I1213 01:29:28.239779 2789 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:29:28.239794 kubelet[2789]: I1213 01:29:28.239795 2789 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:28.244918 kubelet[2789]: I1213 01:29:28.244891 2789 policy_none.go:49] "None policy: Start" Dec 13 01:29:28.245624 kubelet[2789]: I1213 01:29:28.245601 2789 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:29:28.245624 kubelet[2789]: I1213 01:29:28.245627 2789 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:29:28.254904 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:29:28.262993 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:29:28.266525 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:29:28.277366 kubelet[2789]: I1213 01:29:28.277342 2789 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:29:28.277708 kubelet[2789]: I1213 01:29:28.277693 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:29:28.277810 kubelet[2789]: I1213 01:29:28.277777 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:29:28.278165 kubelet[2789]: I1213 01:29:28.278147 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:29:28.281143 kubelet[2789]: E1213 01:29:28.281096 2789 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-a-ab3ee36414\" not found" Dec 13 01:29:28.307742 kubelet[2789]: E1213 01:29:28.306126 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-ab3ee36414?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="400ms" Dec 13 01:29:28.380046 kubelet[2789]: I1213 01:29:28.379977 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.380426 kubelet[2789]: E1213 01:29:28.380402 2789 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.448238 systemd[1]: Created slice kubepods-burstable-pode7e60c39da16215647f47c5ccb6d4fcb.slice - libcontainer container kubepods-burstable-pode7e60c39da16215647f47c5ccb6d4fcb.slice. Dec 13 01:29:28.468606 systemd[1]: Created slice kubepods-burstable-pod45bc4eecec607b79f91a3515c18b443a.slice - libcontainer container kubepods-burstable-pod45bc4eecec607b79f91a3515c18b443a.slice. Dec 13 01:29:28.481703 systemd[1]: Created slice kubepods-burstable-pode8a23d3c3c2a3b8fd0ecca72a9755d31.slice - libcontainer container kubepods-burstable-pode8a23d3c3c2a3b8fd0ecca72a9755d31.slice. 
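The three per-pod burstable slices above belong to the control-plane static pods (kube-scheduler, kube-apiserver and kube-controller-manager, as the volume entries just below show), whose manifests kubelet watches under the static pod path /etc/kubernetes/manifests logged earlier. A hypothetical helper that simply lists whatever manifests are present; the .yaml extension is an assumption about how they are named.

```python
# List the static pod manifests under the path kubelet is watching.
from pathlib import Path

MANIFEST_DIR = Path("/etc/kubernetes/manifests")

if __name__ == "__main__":
    for manifest in sorted(MANIFEST_DIR.glob("*.yaml")):
        print(manifest.name)
```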
Dec 13 01:29:28.507870 kubelet[2789]: I1213 01:29:28.507661 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45bc4eecec607b79f91a3515c18b443a-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-ab3ee36414\" (UID: \"45bc4eecec607b79f91a3515c18b443a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.507870 kubelet[2789]: I1213 01:29:28.507695 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45bc4eecec607b79f91a3515c18b443a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-ab3ee36414\" (UID: \"45bc4eecec607b79f91a3515c18b443a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.507870 kubelet[2789]: I1213 01:29:28.507713 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.507870 kubelet[2789]: I1213 01:29:28.507730 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.507870 kubelet[2789]: I1213 01:29:28.507745 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.508222 kubelet[2789]: I1213 01:29:28.507761 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.508222 kubelet[2789]: I1213 01:29:28.507775 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7e60c39da16215647f47c5ccb6d4fcb-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-ab3ee36414\" (UID: \"e7e60c39da16215647f47c5ccb6d4fcb\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.508222 kubelet[2789]: I1213 01:29:28.507788 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45bc4eecec607b79f91a3515c18b443a-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-ab3ee36414\" (UID: \"45bc4eecec607b79f91a3515c18b443a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.508222 kubelet[2789]: I1213 01:29:28.507802 2789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.582691 kubelet[2789]: I1213 01:29:28.582588 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.582969 kubelet[2789]: E1213 01:29:28.582928 2789 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.589391 kubelet[2789]: E1213 01:29:28.589286 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-ab3ee36414.18109859d2dc63b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-ab3ee36414,UID:ci-4081.2.1-a-ab3ee36414,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-ab3ee36414,},FirstTimestamp:2024-12-13 01:29:28.094811061 +0000 UTC m=+0.681257795,LastTimestamp:2024-12-13 01:29:28.094811061 +0000 UTC m=+0.681257795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-ab3ee36414,}" Dec 13 01:29:28.707355 kubelet[2789]: E1213 01:29:28.707307 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-ab3ee36414?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="800ms" Dec 13 01:29:28.766422 containerd[1711]: time="2024-12-13T01:29:28.766340477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-ab3ee36414,Uid:e7e60c39da16215647f47c5ccb6d4fcb,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:28.780063 containerd[1711]: time="2024-12-13T01:29:28.779872390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-ab3ee36414,Uid:45bc4eecec607b79f91a3515c18b443a,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:28.783836 containerd[1711]: time="2024-12-13T01:29:28.783813279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-ab3ee36414,Uid:e8a23d3c3c2a3b8fd0ecca72a9755d31,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:28.984639 kubelet[2789]: I1213 01:29:28.984593 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:28.984936 kubelet[2789]: E1213 01:29:28.984913 2789 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:29.019060 kubelet[2789]: W1213 01:29:29.018985 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: 
connect: connection refused Dec 13 01:29:29.019060 kubelet[2789]: E1213 01:29:29.019027 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:29.102447 kubelet[2789]: W1213 01:29:29.102349 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Dec 13 01:29:29.102447 kubelet[2789]: E1213 01:29:29.102412 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:29.155150 kubelet[2789]: W1213 01:29:29.155091 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-ab3ee36414&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Dec 13 01:29:29.155265 kubelet[2789]: E1213 01:29:29.155163 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-ab3ee36414&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:29.354791 kubelet[2789]: W1213 01:29:29.354686 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Dec 13 01:29:29.354791 kubelet[2789]: E1213 01:29:29.354735 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:29.365355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392272877.mount: Deactivated successfully. 
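The repeated reflector, lease and certificate errors all share one cause: nothing is listening on 10.200.20.18:6443 yet, because the kube-apiserver being dialed is itself one of the static pods still being started on this node. A trivial probe for that condition:

```python
# Returns False until something is listening on 10.200.20.18:6443.
import socket

def api_server_listening(host: str = "10.200.20.18", port: int = 6443,
                         timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("kube-apiserver reachable:", api_server_listening())
```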
Dec 13 01:29:29.396696 containerd[1711]: time="2024-12-13T01:29:29.396609434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:29.399675 containerd[1711]: time="2024-12-13T01:29:29.399640201Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:29.402558 containerd[1711]: time="2024-12-13T01:29:29.402460128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:29:29.405586 containerd[1711]: time="2024-12-13T01:29:29.405548255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:29:29.408720 containerd[1711]: time="2024-12-13T01:29:29.408663983Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:29.412672 containerd[1711]: time="2024-12-13T01:29:29.412072911Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:29.414092 containerd[1711]: time="2024-12-13T01:29:29.413878835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:29:29.417820 containerd[1711]: time="2024-12-13T01:29:29.417778205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:29.418672 containerd[1711]: time="2024-12-13T01:29:29.418643207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 652.22301ms" Dec 13 01:29:29.420884 containerd[1711]: time="2024-12-13T01:29:29.420846372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 640.910982ms" Dec 13 01:29:29.421514 containerd[1711]: time="2024-12-13T01:29:29.421471254Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 637.520655ms" Dec 13 01:29:29.507891 kubelet[2789]: E1213 01:29:29.507847 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-ab3ee36414?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="1.6s" 
Dec 13 01:29:29.786876 kubelet[2789]: I1213 01:29:29.786801 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:29.787223 kubelet[2789]: E1213 01:29:29.787198 2789 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:29.989487 containerd[1711]: time="2024-12-13T01:29:29.989400220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:29.989487 containerd[1711]: time="2024-12-13T01:29:29.989447540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:29.990014 containerd[1711]: time="2024-12-13T01:29:29.989479100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:29.990092 containerd[1711]: time="2024-12-13T01:29:29.989919582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:29.993761 containerd[1711]: time="2024-12-13T01:29:29.993676031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:29.993761 containerd[1711]: time="2024-12-13T01:29:29.993730831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:29.994119 containerd[1711]: time="2024-12-13T01:29:29.993775271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:29.994231 containerd[1711]: time="2024-12-13T01:29:29.994079752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:29.996528 containerd[1711]: time="2024-12-13T01:29:29.996329877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:29.996528 containerd[1711]: time="2024-12-13T01:29:29.996366677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:29.996528 containerd[1711]: time="2024-12-13T01:29:29.996377277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:29.996528 containerd[1711]: time="2024-12-13T01:29:29.996436917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:30.023682 systemd[1]: Started cri-containerd-6e6915accedc5e95a6cbedb5b80c2e1d3dc06d930a62cf712ac282cf71aaf591.scope - libcontainer container 6e6915accedc5e95a6cbedb5b80c2e1d3dc06d930a62cf712ac282cf71aaf591. Dec 13 01:29:30.024710 systemd[1]: Started cri-containerd-bdebfc9fdcd3b83d20e6264005f0e8f9b7a2da84c5bd1a7e382d4992966d13ad.scope - libcontainer container bdebfc9fdcd3b83d20e6264005f0e8f9b7a2da84c5bd1a7e382d4992966d13ad. 
Dec 13 01:29:30.027035 systemd[1]: Started cri-containerd-cb76a3b0e9257118dfa0fcf72e0bb1e129e1a411edbbb8a023d2292f2cfd845d.scope - libcontainer container cb76a3b0e9257118dfa0fcf72e0bb1e129e1a411edbbb8a023d2292f2cfd845d. Dec 13 01:29:30.064573 containerd[1711]: time="2024-12-13T01:29:30.062851277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-ab3ee36414,Uid:45bc4eecec607b79f91a3515c18b443a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e6915accedc5e95a6cbedb5b80c2e1d3dc06d930a62cf712ac282cf71aaf591\"" Dec 13 01:29:30.069601 containerd[1711]: time="2024-12-13T01:29:30.069468693Z" level=info msg="CreateContainer within sandbox \"6e6915accedc5e95a6cbedb5b80c2e1d3dc06d930a62cf712ac282cf71aaf591\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:29:30.082468 containerd[1711]: time="2024-12-13T01:29:30.082438444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-ab3ee36414,Uid:e8a23d3c3c2a3b8fd0ecca72a9755d31,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdebfc9fdcd3b83d20e6264005f0e8f9b7a2da84c5bd1a7e382d4992966d13ad\"" Dec 13 01:29:30.085813 containerd[1711]: time="2024-12-13T01:29:30.085682692Z" level=info msg="CreateContainer within sandbox \"bdebfc9fdcd3b83d20e6264005f0e8f9b7a2da84c5bd1a7e382d4992966d13ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:29:30.089250 containerd[1711]: time="2024-12-13T01:29:30.088952020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-ab3ee36414,Uid:e7e60c39da16215647f47c5ccb6d4fcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb76a3b0e9257118dfa0fcf72e0bb1e129e1a411edbbb8a023d2292f2cfd845d\"" Dec 13 01:29:30.091616 containerd[1711]: time="2024-12-13T01:29:30.091581346Z" level=info msg="CreateContainer within sandbox \"cb76a3b0e9257118dfa0fcf72e0bb1e129e1a411edbbb8a023d2292f2cfd845d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:29:30.132508 containerd[1711]: time="2024-12-13T01:29:30.132434724Z" level=info msg="CreateContainer within sandbox \"6e6915accedc5e95a6cbedb5b80c2e1d3dc06d930a62cf712ac282cf71aaf591\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"07f1fc7abb57c7c24ba4c5c36504820c7bc041b60aeb6ed66f7bf6226379d64b\"" Dec 13 01:29:30.133262 containerd[1711]: time="2024-12-13T01:29:30.133234526Z" level=info msg="StartContainer for \"07f1fc7abb57c7c24ba4c5c36504820c7bc041b60aeb6ed66f7bf6226379d64b\"" Dec 13 01:29:30.155959 containerd[1711]: time="2024-12-13T01:29:30.155896101Z" level=info msg="CreateContainer within sandbox \"bdebfc9fdcd3b83d20e6264005f0e8f9b7a2da84c5bd1a7e382d4992966d13ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31d495d582398b1d04c1e7e0f212489382c518d7174724fe829101cc9ded9ee0\"" Dec 13 01:29:30.156427 containerd[1711]: time="2024-12-13T01:29:30.156408742Z" level=info msg="StartContainer for \"31d495d582398b1d04c1e7e0f212489382c518d7174724fe829101cc9ded9ee0\"" Dec 13 01:29:30.157017 containerd[1711]: time="2024-12-13T01:29:30.156943263Z" level=info msg="CreateContainer within sandbox \"cb76a3b0e9257118dfa0fcf72e0bb1e129e1a411edbbb8a023d2292f2cfd845d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d8a7297eacfc9c2a11551838912243321762b986ef89637baefc95e31c068830\"" Dec 13 01:29:30.157524 containerd[1711]: time="2024-12-13T01:29:30.157269584Z" level=info msg="StartContainer for 
\"d8a7297eacfc9c2a11551838912243321762b986ef89637baefc95e31c068830\"" Dec 13 01:29:30.158879 systemd[1]: Started cri-containerd-07f1fc7abb57c7c24ba4c5c36504820c7bc041b60aeb6ed66f7bf6226379d64b.scope - libcontainer container 07f1fc7abb57c7c24ba4c5c36504820c7bc041b60aeb6ed66f7bf6226379d64b. Dec 13 01:29:30.172528 kubelet[2789]: E1213 01:29:30.172363 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:30.188765 systemd[1]: Started cri-containerd-d8a7297eacfc9c2a11551838912243321762b986ef89637baefc95e31c068830.scope - libcontainer container d8a7297eacfc9c2a11551838912243321762b986ef89637baefc95e31c068830. Dec 13 01:29:30.198674 systemd[1]: Started cri-containerd-31d495d582398b1d04c1e7e0f212489382c518d7174724fe829101cc9ded9ee0.scope - libcontainer container 31d495d582398b1d04c1e7e0f212489382c518d7174724fe829101cc9ded9ee0. Dec 13 01:29:30.218591 containerd[1711]: time="2024-12-13T01:29:30.218381091Z" level=info msg="StartContainer for \"07f1fc7abb57c7c24ba4c5c36504820c7bc041b60aeb6ed66f7bf6226379d64b\" returns successfully" Dec 13 01:29:30.243840 containerd[1711]: time="2024-12-13T01:29:30.243793112Z" level=info msg="StartContainer for \"d8a7297eacfc9c2a11551838912243321762b986ef89637baefc95e31c068830\" returns successfully" Dec 13 01:29:30.265862 containerd[1711]: time="2024-12-13T01:29:30.265618925Z" level=info msg="StartContainer for \"31d495d582398b1d04c1e7e0f212489382c518d7174724fe829101cc9ded9ee0\" returns successfully" Dec 13 01:29:31.392589 kubelet[2789]: I1213 01:29:31.392230 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:32.529902 kubelet[2789]: E1213 01:29:32.529865 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-a-ab3ee36414\" not found" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:32.599686 kubelet[2789]: I1213 01:29:32.599645 2789 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:32.599686 kubelet[2789]: E1213 01:29:32.599681 2789 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.2.1-a-ab3ee36414\": node \"ci-4081.2.1-a-ab3ee36414\" not found" Dec 13 01:29:33.092744 kubelet[2789]: I1213 01:29:33.092709 2789 apiserver.go:52] "Watching apiserver" Dec 13 01:29:33.105115 kubelet[2789]: I1213 01:29:33.105081 2789 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:29:33.165181 kubelet[2789]: E1213 01:29:33.165146 2789 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-ab3ee36414\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:34.338528 systemd[1]: Reloading requested from client PID 3064 ('systemctl') (unit session-9.scope)... Dec 13 01:29:34.338542 systemd[1]: Reloading... Dec 13 01:29:34.422532 zram_generator::config[3113]: No configuration found. 
Dec 13 01:29:34.516001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:34.605163 systemd[1]: Reloading finished in 266 ms. Dec 13 01:29:34.641531 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:34.655475 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:29:34.655828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:34.662739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:34.769111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:34.779767 (kubelet)[3168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:34.828994 kubelet[3168]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:34.828994 kubelet[3168]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:29:34.828994 kubelet[3168]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:34.830748 kubelet[3168]: I1213 01:29:34.829389 3168 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:29:34.835783 kubelet[3168]: I1213 01:29:34.835758 3168 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:29:34.835891 kubelet[3168]: I1213 01:29:34.835880 3168 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:29:34.836296 kubelet[3168]: I1213 01:29:34.836282 3168 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:29:34.838149 kubelet[3168]: I1213 01:29:34.838127 3168 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:29:34.840553 kubelet[3168]: I1213 01:29:34.840485 3168 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:34.845263 kubelet[3168]: E1213 01:29:34.845187 3168 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:29:34.845263 kubelet[3168]: I1213 01:29:34.845260 3168 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:29:34.848315 kubelet[3168]: I1213 01:29:34.847983 3168 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:29:34.848315 kubelet[3168]: I1213 01:29:34.848141 3168 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:29:34.848315 kubelet[3168]: I1213 01:29:34.848224 3168 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:29:34.848439 kubelet[3168]: I1213 01:29:34.848245 3168 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-a-ab3ee36414","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:29:34.848439 kubelet[3168]: I1213 01:29:34.848402 3168 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:29:34.848439 kubelet[3168]: I1213 01:29:34.848411 3168 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:29:34.848776 kubelet[3168]: I1213 01:29:34.848441 3168 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:34.848776 kubelet[3168]: I1213 01:29:34.848554 3168 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:29:34.848776 kubelet[3168]: I1213 01:29:34.848568 3168 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:29:34.848776 kubelet[3168]: I1213 01:29:34.848587 3168 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:29:34.849227 kubelet[3168]: I1213 01:29:34.849204 3168 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:29:34.854321 kubelet[3168]: I1213 01:29:34.854221 3168 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:29:34.855351 kubelet[3168]: I1213 01:29:34.854943 3168 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:29:34.859514 kubelet[3168]: I1213 01:29:34.855914 3168 server.go:1269] "Started kubelet" Dec 13 01:29:34.860805 kubelet[3168]: I1213 01:29:34.860779 3168 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:29:34.877730 
kubelet[3168]: I1213 01:29:34.877687 3168 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:29:34.878911 kubelet[3168]: I1213 01:29:34.878894 3168 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:29:34.880566 kubelet[3168]: I1213 01:29:34.880518 3168 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:29:34.881635 kubelet[3168]: I1213 01:29:34.881618 3168 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:29:34.881934 kubelet[3168]: I1213 01:29:34.881903 3168 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:29:34.882933 kubelet[3168]: I1213 01:29:34.882899 3168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:29:34.883539 kubelet[3168]: I1213 01:29:34.883167 3168 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:29:34.887901 kubelet[3168]: I1213 01:29:34.887873 3168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:29:34.887901 kubelet[3168]: I1213 01:29:34.887901 3168 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:29:34.887998 kubelet[3168]: I1213 01:29:34.887920 3168 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:29:34.887998 kubelet[3168]: E1213 01:29:34.887955 3168 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:29:34.892871 kubelet[3168]: I1213 01:29:34.892731 3168 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:29:34.892871 kubelet[3168]: I1213 01:29:34.892819 3168 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:29:34.896119 kubelet[3168]: I1213 01:29:34.896101 3168 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:29:34.896426 kubelet[3168]: I1213 01:29:34.896376 3168 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:29:34.898978 kubelet[3168]: I1213 01:29:34.898318 3168 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:29:34.953013 kubelet[3168]: I1213 01:29:34.952981 3168 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:29:34.953510 kubelet[3168]: I1213 01:29:34.953154 3168 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:29:34.953510 kubelet[3168]: I1213 01:29:34.953178 3168 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:34.953510 kubelet[3168]: I1213 01:29:34.953362 3168 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:29:34.953737 kubelet[3168]: I1213 01:29:34.953373 3168 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:29:34.954023 kubelet[3168]: I1213 01:29:34.953954 3168 policy_none.go:49] "None policy: Start" Dec 13 01:29:34.954836 kubelet[3168]: I1213 01:29:34.954819 3168 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:29:34.955187 kubelet[3168]: I1213 01:29:34.955173 3168 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:29:34.955451 kubelet[3168]: I1213 01:29:34.955439 3168 state_mem.go:75] "Updated machine memory state" 
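The container_manager_linux entry above serializes the kubelet's entire nodeConfig, including its hard-eviction thresholds, into a single JSON blob. Purely as a readability aid, the thresholds it reports amount to the following sketch (signal names and values copied from that log entry; this is an illustrative restatement, not kubelet source):

package main

import "fmt"

// Hard-eviction thresholds as printed in the nodeConfig log entry above.
// Percentage signals are fractions of the relevant filesystem;
// memory.available is an absolute quantity.
var hardEviction = map[string]string{
	"imagefs.available":  "15%",
	"imagefs.inodesFree": "5%",
	"memory.available":   "100Mi",
	"nodefs.available":   "10%",
	"nodefs.inodesFree":  "5%",
}

func main() {
	for signal, threshold := range hardEviction {
		fmt.Printf("evict when %s falls below %s\n", signal, threshold)
	}
}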
Dec 13 01:29:34.959594 kubelet[3168]: I1213 01:29:34.959568 3168 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:29:34.959745 kubelet[3168]: I1213 01:29:34.959716 3168 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:29:34.959786 kubelet[3168]: I1213 01:29:34.959734 3168 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:29:34.960051 kubelet[3168]: I1213 01:29:34.960025 3168 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:29:34.998151 kubelet[3168]: W1213 01:29:34.998118 3168 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:29:35.001077 kubelet[3168]: W1213 01:29:35.001054 3168 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:29:35.001420 kubelet[3168]: W1213 01:29:35.001117 3168 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:29:35.068455 kubelet[3168]: I1213 01:29:35.066911 3168 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.077778 kubelet[3168]: I1213 01:29:35.077700 3168 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.077946 kubelet[3168]: I1213 01:29:35.077934 3168 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.097649 kubelet[3168]: I1213 01:29:35.097605 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.097765 kubelet[3168]: I1213 01:29:35.097668 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45bc4eecec607b79f91a3515c18b443a-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-ab3ee36414\" (UID: \"45bc4eecec607b79f91a3515c18b443a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.097765 kubelet[3168]: I1213 01:29:35.097690 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45bc4eecec607b79f91a3515c18b443a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-ab3ee36414\" (UID: \"45bc4eecec607b79f91a3515c18b443a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.097765 kubelet[3168]: I1213 01:29:35.097709 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.097765 kubelet[3168]: I1213 
01:29:35.097725 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.097765 kubelet[3168]: I1213 01:29:35.097741 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.097883 kubelet[3168]: I1213 01:29:35.097755 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45bc4eecec607b79f91a3515c18b443a-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-ab3ee36414\" (UID: \"45bc4eecec607b79f91a3515c18b443a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.097883 kubelet[3168]: I1213 01:29:35.097770 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8a23d3c3c2a3b8fd0ecca72a9755d31-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-ab3ee36414\" (UID: \"e8a23d3c3c2a3b8fd0ecca72a9755d31\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.097883 kubelet[3168]: I1213 01:29:35.097787 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7e60c39da16215647f47c5ccb6d4fcb-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-ab3ee36414\" (UID: \"e7e60c39da16215647f47c5ccb6d4fcb\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:35.850441 kubelet[3168]: I1213 01:29:35.850210 3168 apiserver.go:52] "Watching apiserver" Dec 13 01:29:35.897218 kubelet[3168]: I1213 01:29:35.897154 3168 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:29:35.937989 kubelet[3168]: W1213 01:29:35.937906 3168 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:29:35.938315 kubelet[3168]: E1213 01:29:35.937961 3168 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-ab3ee36414\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-ab3ee36414" Dec 13 01:29:36.006645 kubelet[3168]: I1213 01:29:36.006579 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-a-ab3ee36414" podStartSLOduration=2.00656118 podStartE2EDuration="2.00656118s" podCreationTimestamp="2024-12-13 01:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:35.980443277 +0000 UTC m=+1.197841963" watchObservedRunningTime="2024-12-13 01:29:36.00656118 +0000 UTC m=+1.223959906" Dec 13 01:29:36.007337 kubelet[3168]: I1213 01:29:36.007199 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4081.2.1-a-ab3ee36414" podStartSLOduration=2.007189061 podStartE2EDuration="2.007189061s" podCreationTimestamp="2024-12-13 01:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:36.00637846 +0000 UTC m=+1.223777146" watchObservedRunningTime="2024-12-13 01:29:36.007189061 +0000 UTC m=+1.224587787" Dec 13 01:29:36.040923 kubelet[3168]: I1213 01:29:36.040809 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-a-ab3ee36414" podStartSLOduration=2.040790102 podStartE2EDuration="2.040790102s" podCreationTimestamp="2024-12-13 01:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:36.040164061 +0000 UTC m=+1.257562787" watchObservedRunningTime="2024-12-13 01:29:36.040790102 +0000 UTC m=+1.258188828" Dec 13 01:29:39.878071 sudo[2304]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:39.963480 sshd[2235]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:39.967509 systemd[1]: sshd@6-10.200.20.18:22-10.200.16.10:42646.service: Deactivated successfully. Dec 13 01:29:39.970031 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:29:39.970814 systemd[1]: session-9.scope: Consumed 7.477s CPU time, 152.1M memory peak, 0B memory swap peak. Dec 13 01:29:39.971277 systemd-logind[1680]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:29:39.972335 systemd-logind[1680]: Removed session 9. Dec 13 01:29:40.862020 kubelet[3168]: I1213 01:29:40.861933 3168 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:29:40.865057 containerd[1711]: time="2024-12-13T01:29:40.864502440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:29:40.865357 kubelet[3168]: I1213 01:29:40.864852 3168 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:29:41.713278 systemd[1]: Created slice kubepods-besteffort-podf828bf4d_5b4f_4022_a14f_2ae560113e75.slice - libcontainer container kubepods-besteffort-podf828bf4d_5b4f_4022_a14f_2ae560113e75.slice. 
Dec 13 01:29:41.742525 kubelet[3168]: I1213 01:29:41.742464 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f828bf4d-5b4f-4022-a14f-2ae560113e75-kube-proxy\") pod \"kube-proxy-2c828\" (UID: \"f828bf4d-5b4f-4022-a14f-2ae560113e75\") " pod="kube-system/kube-proxy-2c828" Dec 13 01:29:41.742525 kubelet[3168]: I1213 01:29:41.742528 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8rkc\" (UniqueName: \"kubernetes.io/projected/f828bf4d-5b4f-4022-a14f-2ae560113e75-kube-api-access-w8rkc\") pod \"kube-proxy-2c828\" (UID: \"f828bf4d-5b4f-4022-a14f-2ae560113e75\") " pod="kube-system/kube-proxy-2c828" Dec 13 01:29:41.742675 kubelet[3168]: I1213 01:29:41.742554 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f828bf4d-5b4f-4022-a14f-2ae560113e75-xtables-lock\") pod \"kube-proxy-2c828\" (UID: \"f828bf4d-5b4f-4022-a14f-2ae560113e75\") " pod="kube-system/kube-proxy-2c828" Dec 13 01:29:41.742675 kubelet[3168]: I1213 01:29:41.742569 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f828bf4d-5b4f-4022-a14f-2ae560113e75-lib-modules\") pod \"kube-proxy-2c828\" (UID: \"f828bf4d-5b4f-4022-a14f-2ae560113e75\") " pod="kube-system/kube-proxy-2c828" Dec 13 01:29:41.983782 systemd[1]: Created slice kubepods-besteffort-pod6e64edcb_cdee_4aac_8c43_0c4f2b4106fa.slice - libcontainer container kubepods-besteffort-pod6e64edcb_cdee_4aac_8c43_0c4f2b4106fa.slice. Dec 13 01:29:42.021597 containerd[1711]: time="2024-12-13T01:29:42.021509893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2c828,Uid:f828bf4d-5b4f-4022-a14f-2ae560113e75,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:42.045179 kubelet[3168]: I1213 01:29:42.045070 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e64edcb-cdee-4aac-8c43-0c4f2b4106fa-var-lib-calico\") pod \"tigera-operator-76c4976dd7-xhfbp\" (UID: \"6e64edcb-cdee-4aac-8c43-0c4f2b4106fa\") " pod="tigera-operator/tigera-operator-76c4976dd7-xhfbp" Dec 13 01:29:42.045179 kubelet[3168]: I1213 01:29:42.045114 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfnf4\" (UniqueName: \"kubernetes.io/projected/6e64edcb-cdee-4aac-8c43-0c4f2b4106fa-kube-api-access-tfnf4\") pod \"tigera-operator-76c4976dd7-xhfbp\" (UID: \"6e64edcb-cdee-4aac-8c43-0c4f2b4106fa\") " pod="tigera-operator/tigera-operator-76c4976dd7-xhfbp" Dec 13 01:29:42.056527 containerd[1711]: time="2024-12-13T01:29:42.056405650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:42.056527 containerd[1711]: time="2024-12-13T01:29:42.056458810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:42.056527 containerd[1711]: time="2024-12-13T01:29:42.056474651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:42.056981 containerd[1711]: time="2024-12-13T01:29:42.056587491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:42.074710 systemd[1]: Started cri-containerd-65fbd28bb945f29f32940c9a73c6d16c347f05867c697f2c0b2fb03221ed078b.scope - libcontainer container 65fbd28bb945f29f32940c9a73c6d16c347f05867c697f2c0b2fb03221ed078b. Dec 13 01:29:42.093914 containerd[1711]: time="2024-12-13T01:29:42.093869054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2c828,Uid:f828bf4d-5b4f-4022-a14f-2ae560113e75,Namespace:kube-system,Attempt:0,} returns sandbox id \"65fbd28bb945f29f32940c9a73c6d16c347f05867c697f2c0b2fb03221ed078b\"" Dec 13 01:29:42.097448 containerd[1711]: time="2024-12-13T01:29:42.097266381Z" level=info msg="CreateContainer within sandbox \"65fbd28bb945f29f32940c9a73c6d16c347f05867c697f2c0b2fb03221ed078b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:29:42.135804 containerd[1711]: time="2024-12-13T01:29:42.135754467Z" level=info msg="CreateContainer within sandbox \"65fbd28bb945f29f32940c9a73c6d16c347f05867c697f2c0b2fb03221ed078b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dfbc9a3369261d437cef164278f7a8925e5c3611a7006ba6179803ed069465d4\"" Dec 13 01:29:42.137627 containerd[1711]: time="2024-12-13T01:29:42.136718669Z" level=info msg="StartContainer for \"dfbc9a3369261d437cef164278f7a8925e5c3611a7006ba6179803ed069465d4\"" Dec 13 01:29:42.167650 systemd[1]: Started cri-containerd-dfbc9a3369261d437cef164278f7a8925e5c3611a7006ba6179803ed069465d4.scope - libcontainer container dfbc9a3369261d437cef164278f7a8925e5c3611a7006ba6179803ed069465d4. Dec 13 01:29:42.195244 containerd[1711]: time="2024-12-13T01:29:42.195180439Z" level=info msg="StartContainer for \"dfbc9a3369261d437cef164278f7a8925e5c3611a7006ba6179803ed069465d4\" returns successfully" Dec 13 01:29:42.288176 containerd[1711]: time="2024-12-13T01:29:42.288076526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xhfbp,Uid:6e64edcb-cdee-4aac-8c43-0c4f2b4106fa,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:29:42.331803 containerd[1711]: time="2024-12-13T01:29:42.331390982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:42.331803 containerd[1711]: time="2024-12-13T01:29:42.331732663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:42.331803 containerd[1711]: time="2024-12-13T01:29:42.331768703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:42.332082 containerd[1711]: time="2024-12-13T01:29:42.331907783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:42.350718 systemd[1]: Started cri-containerd-4bf2012f54d130dc3984f442b6f978d0d6caf25efcafde2e3b803fc95ce9bf79.scope - libcontainer container 4bf2012f54d130dc3984f442b6f978d0d6caf25efcafde2e3b803fc95ce9bf79. 
Dec 13 01:29:42.376624 containerd[1711]: time="2024-12-13T01:29:42.376579962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xhfbp,Uid:6e64edcb-cdee-4aac-8c43-0c4f2b4106fa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4bf2012f54d130dc3984f442b6f978d0d6caf25efcafde2e3b803fc95ce9bf79\"" Dec 13 01:29:42.378437 containerd[1711]: time="2024-12-13T01:29:42.378401686Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:29:47.752802 kubelet[3168]: I1213 01:29:47.752471 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2c828" podStartSLOduration=6.75245402 podStartE2EDuration="6.75245402s" podCreationTimestamp="2024-12-13 01:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:42.952242203 +0000 UTC m=+8.169640929" watchObservedRunningTime="2024-12-13 01:29:47.75245402 +0000 UTC m=+12.969852746" Dec 13 01:29:47.864838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072703928.mount: Deactivated successfully. Dec 13 01:29:48.188547 containerd[1711]: time="2024-12-13T01:29:48.187802471Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:48.190002 containerd[1711]: time="2024-12-13T01:29:48.189972316Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125980" Dec 13 01:29:48.192620 containerd[1711]: time="2024-12-13T01:29:48.192560082Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:48.196360 containerd[1711]: time="2024-12-13T01:29:48.196316050Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:48.197123 containerd[1711]: time="2024-12-13T01:29:48.197022572Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 5.818584526s" Dec 13 01:29:48.197123 containerd[1711]: time="2024-12-13T01:29:48.197051012Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 01:29:48.199994 containerd[1711]: time="2024-12-13T01:29:48.199802018Z" level=info msg="CreateContainer within sandbox \"4bf2012f54d130dc3984f442b6f978d0d6caf25efcafde2e3b803fc95ce9bf79\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:29:48.236380 containerd[1711]: time="2024-12-13T01:29:48.236299660Z" level=info msg="CreateContainer within sandbox \"4bf2012f54d130dc3984f442b6f978d0d6caf25efcafde2e3b803fc95ce9bf79\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c4bddc8177216e6fbbe210faa2bb16cf85febbbc13f4706dbbc7dc8b69901ad2\"" Dec 13 01:29:48.237099 containerd[1711]: time="2024-12-13T01:29:48.236904301Z" level=info msg="StartContainer for \"c4bddc8177216e6fbbe210faa2bb16cf85febbbc13f4706dbbc7dc8b69901ad2\"" Dec 
13 01:29:48.266668 systemd[1]: Started cri-containerd-c4bddc8177216e6fbbe210faa2bb16cf85febbbc13f4706dbbc7dc8b69901ad2.scope - libcontainer container c4bddc8177216e6fbbe210faa2bb16cf85febbbc13f4706dbbc7dc8b69901ad2. Dec 13 01:29:48.292145 containerd[1711]: time="2024-12-13T01:29:48.292030344Z" level=info msg="StartContainer for \"c4bddc8177216e6fbbe210faa2bb16cf85febbbc13f4706dbbc7dc8b69901ad2\" returns successfully" Dec 13 01:29:49.135476 kubelet[3168]: I1213 01:29:49.135418 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-xhfbp" podStartSLOduration=2.315684577 podStartE2EDuration="8.135401545s" podCreationTimestamp="2024-12-13 01:29:41 +0000 UTC" firstStartedPulling="2024-12-13 01:29:42.378014046 +0000 UTC m=+7.595412772" lastFinishedPulling="2024-12-13 01:29:48.197731054 +0000 UTC m=+13.415129740" observedRunningTime="2024-12-13 01:29:48.963588601 +0000 UTC m=+14.180987327" watchObservedRunningTime="2024-12-13 01:29:49.135401545 +0000 UTC m=+14.352800271" Dec 13 01:29:52.050933 systemd[1]: Created slice kubepods-besteffort-pod52a4a92d_421b_4e8a_b175_f7b08931a1ce.slice - libcontainer container kubepods-besteffort-pod52a4a92d_421b_4e8a_b175_f7b08931a1ce.slice. Dec 13 01:29:52.102324 kubelet[3168]: I1213 01:29:52.102188 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/52a4a92d-421b-4e8a-b175-f7b08931a1ce-typha-certs\") pod \"calico-typha-799b6dc4f4-xxtsb\" (UID: \"52a4a92d-421b-4e8a-b175-f7b08931a1ce\") " pod="calico-system/calico-typha-799b6dc4f4-xxtsb" Dec 13 01:29:52.102324 kubelet[3168]: I1213 01:29:52.102226 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47d66\" (UniqueName: \"kubernetes.io/projected/52a4a92d-421b-4e8a-b175-f7b08931a1ce-kube-api-access-47d66\") pod \"calico-typha-799b6dc4f4-xxtsb\" (UID: \"52a4a92d-421b-4e8a-b175-f7b08931a1ce\") " pod="calico-system/calico-typha-799b6dc4f4-xxtsb" Dec 13 01:29:52.102751 kubelet[3168]: I1213 01:29:52.102343 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52a4a92d-421b-4e8a-b175-f7b08931a1ce-tigera-ca-bundle\") pod \"calico-typha-799b6dc4f4-xxtsb\" (UID: \"52a4a92d-421b-4e8a-b175-f7b08931a1ce\") " pod="calico-system/calico-typha-799b6dc4f4-xxtsb" Dec 13 01:29:52.138761 systemd[1]: Created slice kubepods-besteffort-podac1b6b70_e02e_4133_9078_db4dc09296f1.slice - libcontainer container kubepods-besteffort-podac1b6b70_e02e_4133_9078_db4dc09296f1.slice. 
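For the tigera-operator pod the two durations diverge. The logged values are consistent, to within the precision printed, with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Extending the sketch after the control-plane entries, with timestamps copied from the entry above (again inferred from the numbers, not from kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2024-12-13 01:29:41 +0000 UTC")
	pullStart := parse("2024-12-13 01:29:42.378014046 +0000 UTC")
	pullEnd := parse("2024-12-13 01:29:48.197731054 +0000 UTC")
	running := parse("2024-12-13 01:29:49.135401545 +0000 UTC")

	e2e := running.Sub(created)         // 8.135401545s, the logged podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // ~2.3156845s, close to the logged podStartSLOduration
	fmt.Println(e2e, slo)
}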
Dec 13 01:29:52.203718 kubelet[3168]: I1213 01:29:52.203672 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ac1b6b70-e02e-4133-9078-db4dc09296f1-cni-net-dir\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203718 kubelet[3168]: I1213 01:29:52.203719 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1b6b70-e02e-4133-9078-db4dc09296f1-tigera-ca-bundle\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203879 kubelet[3168]: I1213 01:29:52.203738 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frvxc\" (UniqueName: \"kubernetes.io/projected/ac1b6b70-e02e-4133-9078-db4dc09296f1-kube-api-access-frvxc\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203879 kubelet[3168]: I1213 01:29:52.203757 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac1b6b70-e02e-4133-9078-db4dc09296f1-xtables-lock\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203879 kubelet[3168]: I1213 01:29:52.203773 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ac1b6b70-e02e-4133-9078-db4dc09296f1-policysync\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203879 kubelet[3168]: I1213 01:29:52.203787 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ac1b6b70-e02e-4133-9078-db4dc09296f1-var-run-calico\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203879 kubelet[3168]: I1213 01:29:52.203803 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ac1b6b70-e02e-4133-9078-db4dc09296f1-node-certs\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203992 kubelet[3168]: I1213 01:29:52.203819 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac1b6b70-e02e-4133-9078-db4dc09296f1-lib-modules\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203992 kubelet[3168]: I1213 01:29:52.203845 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ac1b6b70-e02e-4133-9078-db4dc09296f1-cni-log-dir\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203992 kubelet[3168]: I1213 01:29:52.203861 3168 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ac1b6b70-e02e-4133-9078-db4dc09296f1-flexvol-driver-host\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203992 kubelet[3168]: I1213 01:29:52.203877 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ac1b6b70-e02e-4133-9078-db4dc09296f1-var-lib-calico\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.203992 kubelet[3168]: I1213 01:29:52.203901 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ac1b6b70-e02e-4133-9078-db4dc09296f1-cni-bin-dir\") pod \"calico-node-kz488\" (UID: \"ac1b6b70-e02e-4133-9078-db4dc09296f1\") " pod="calico-system/calico-node-kz488" Dec 13 01:29:52.255213 kubelet[3168]: E1213 01:29:52.254537 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kq89" podUID="a987f675-3896-4490-b719-7c769af12cf2" Dec 13 01:29:52.304635 kubelet[3168]: I1213 01:29:52.304536 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a987f675-3896-4490-b719-7c769af12cf2-kubelet-dir\") pod \"csi-node-driver-9kq89\" (UID: \"a987f675-3896-4490-b719-7c769af12cf2\") " pod="calico-system/csi-node-driver-9kq89" Dec 13 01:29:52.306335 kubelet[3168]: I1213 01:29:52.304756 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqz2h\" (UniqueName: \"kubernetes.io/projected/a987f675-3896-4490-b719-7c769af12cf2-kube-api-access-xqz2h\") pod \"csi-node-driver-9kq89\" (UID: \"a987f675-3896-4490-b719-7c769af12cf2\") " pod="calico-system/csi-node-driver-9kq89" Dec 13 01:29:52.306335 kubelet[3168]: I1213 01:29:52.304825 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a987f675-3896-4490-b719-7c769af12cf2-registration-dir\") pod \"csi-node-driver-9kq89\" (UID: \"a987f675-3896-4490-b719-7c769af12cf2\") " pod="calico-system/csi-node-driver-9kq89" Dec 13 01:29:52.306335 kubelet[3168]: I1213 01:29:52.304885 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a987f675-3896-4490-b719-7c769af12cf2-varrun\") pod \"csi-node-driver-9kq89\" (UID: \"a987f675-3896-4490-b719-7c769af12cf2\") " pod="calico-system/csi-node-driver-9kq89" Dec 13 01:29:52.306335 kubelet[3168]: I1213 01:29:52.304911 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a987f675-3896-4490-b719-7c769af12cf2-socket-dir\") pod \"csi-node-driver-9kq89\" (UID: \"a987f675-3896-4490-b719-7c769af12cf2\") " pod="calico-system/csi-node-driver-9kq89" Dec 13 01:29:52.311602 kubelet[3168]: E1213 01:29:52.311574 3168 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Dec 13 01:29:52.311602 kubelet[3168]: W1213 01:29:52.311597 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.311709 kubelet[3168]: E1213 01:29:52.311617 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.316328 kubelet[3168]: E1213 01:29:52.316312 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.316462 kubelet[3168]: W1213 01:29:52.316449 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.316575 kubelet[3168]: E1213 01:29:52.316563 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.328045 kubelet[3168]: E1213 01:29:52.328029 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.328218 kubelet[3168]: W1213 01:29:52.328203 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.328299 kubelet[3168]: E1213 01:29:52.328289 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.356852 containerd[1711]: time="2024-12-13T01:29:52.356760725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-799b6dc4f4-xxtsb,Uid:52a4a92d-421b-4e8a-b175-f7b08931a1ce,Namespace:calico-system,Attempt:0,}" Dec 13 01:29:52.405940 kubelet[3168]: E1213 01:29:52.405909 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.406262 kubelet[3168]: W1213 01:29:52.406092 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.406262 kubelet[3168]: E1213 01:29:52.406119 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.406592 kubelet[3168]: E1213 01:29:52.406575 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.406679 kubelet[3168]: W1213 01:29:52.406667 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.406753 kubelet[3168]: E1213 01:29:52.406743 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:52.407051 kubelet[3168]: E1213 01:29:52.407037 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.407160 kubelet[3168]: W1213 01:29:52.407139 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.407300 kubelet[3168]: E1213 01:29:52.407220 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.407525 containerd[1711]: time="2024-12-13T01:29:52.407125277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:52.407525 containerd[1711]: time="2024-12-13T01:29:52.407183357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:52.407525 containerd[1711]: time="2024-12-13T01:29:52.407194837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:52.407525 containerd[1711]: time="2024-12-13T01:29:52.407276917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:52.408063 kubelet[3168]: E1213 01:29:52.407887 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.408063 kubelet[3168]: W1213 01:29:52.407901 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.408846 kubelet[3168]: E1213 01:29:52.408386 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.409829 kubelet[3168]: E1213 01:29:52.409077 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.409829 kubelet[3168]: W1213 01:29:52.409097 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.410095 kubelet[3168]: E1213 01:29:52.409942 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.410313 kubelet[3168]: E1213 01:29:52.410300 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.410664 kubelet[3168]: W1213 01:29:52.410519 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.411283 kubelet[3168]: E1213 01:29:52.411157 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:52.411667 kubelet[3168]: E1213 01:29:52.411652 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.411945 kubelet[3168]: W1213 01:29:52.411727 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.412607 kubelet[3168]: E1213 01:29:52.412047 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.412766 kubelet[3168]: E1213 01:29:52.412753 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.413260 kubelet[3168]: W1213 01:29:52.413238 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.413675 kubelet[3168]: E1213 01:29:52.413635 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.414375 kubelet[3168]: E1213 01:29:52.414237 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.414550 kubelet[3168]: W1213 01:29:52.414459 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.414773 kubelet[3168]: E1213 01:29:52.414692 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.414960 kubelet[3168]: E1213 01:29:52.414901 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.414960 kubelet[3168]: W1213 01:29:52.414913 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.415138 kubelet[3168]: E1213 01:29:52.415052 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.415406 kubelet[3168]: E1213 01:29:52.415339 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.415406 kubelet[3168]: W1213 01:29:52.415355 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.415656 kubelet[3168]: E1213 01:29:52.415508 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:52.415776 kubelet[3168]: E1213 01:29:52.415766 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.416223 kubelet[3168]: W1213 01:29:52.415853 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.416223 kubelet[3168]: E1213 01:29:52.415956 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.416822 kubelet[3168]: E1213 01:29:52.416725 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.417366 kubelet[3168]: W1213 01:29:52.417203 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.417366 kubelet[3168]: E1213 01:29:52.417301 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.418206 kubelet[3168]: E1213 01:29:52.417858 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.418206 kubelet[3168]: W1213 01:29:52.417871 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.418206 kubelet[3168]: E1213 01:29:52.418048 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.418680 kubelet[3168]: E1213 01:29:52.418472 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.419462 kubelet[3168]: W1213 01:29:52.418955 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.419722 kubelet[3168]: E1213 01:29:52.419654 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.420253 kubelet[3168]: E1213 01:29:52.420158 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.420253 kubelet[3168]: W1213 01:29:52.420172 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.420445 kubelet[3168]: E1213 01:29:52.420366 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:52.420611 kubelet[3168]: E1213 01:29:52.420568 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.420611 kubelet[3168]: W1213 01:29:52.420596 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.421024 kubelet[3168]: E1213 01:29:52.420753 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.421547 kubelet[3168]: E1213 01:29:52.421295 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.421547 kubelet[3168]: W1213 01:29:52.421307 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.421740 kubelet[3168]: E1213 01:29:52.421664 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.421995 kubelet[3168]: E1213 01:29:52.421908 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.421995 kubelet[3168]: W1213 01:29:52.421924 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.421995 kubelet[3168]: E1213 01:29:52.422013 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.422273 kubelet[3168]: E1213 01:29:52.422206 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.422273 kubelet[3168]: W1213 01:29:52.422216 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.422483 kubelet[3168]: E1213 01:29:52.422256 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:52.422578 kubelet[3168]: E1213 01:29:52.422558 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.422578 kubelet[3168]: W1213 01:29:52.422568 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.423470 kubelet[3168]: E1213 01:29:52.422714 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.423470 kubelet[3168]: W1213 01:29:52.422728 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.423470 kubelet[3168]: E1213 01:29:52.422739 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.423470 kubelet[3168]: E1213 01:29:52.422891 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.423470 kubelet[3168]: W1213 01:29:52.422897 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.423470 kubelet[3168]: E1213 01:29:52.422906 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.423470 kubelet[3168]: E1213 01:29:52.423039 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.423470 kubelet[3168]: W1213 01:29:52.423046 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.423470 kubelet[3168]: E1213 01:29:52.423054 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.423470 kubelet[3168]: E1213 01:29:52.423066 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.424924 kubelet[3168]: E1213 01:29:52.423297 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.424924 kubelet[3168]: W1213 01:29:52.423305 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.424924 kubelet[3168]: E1213 01:29:52.423314 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:52.431651 systemd[1]: Started cri-containerd-6122a49e09e0a34341354db47b3c317a7a6d4d12875ea82c91d50458448de6f9.scope - libcontainer container 6122a49e09e0a34341354db47b3c317a7a6d4d12875ea82c91d50458448de6f9. Dec 13 01:29:52.436528 kubelet[3168]: E1213 01:29:52.436400 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:52.436528 kubelet[3168]: W1213 01:29:52.436424 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:52.436528 kubelet[3168]: E1213 01:29:52.436442 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:52.442146 containerd[1711]: time="2024-12-13T01:29:52.442031474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kz488,Uid:ac1b6b70-e02e-4133-9078-db4dc09296f1,Namespace:calico-system,Attempt:0,}" Dec 13 01:29:52.462244 containerd[1711]: time="2024-12-13T01:29:52.461988959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-799b6dc4f4-xxtsb,Uid:52a4a92d-421b-4e8a-b175-f7b08931a1ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"6122a49e09e0a34341354db47b3c317a7a6d4d12875ea82c91d50458448de6f9\"" Dec 13 01:29:52.465648 containerd[1711]: time="2024-12-13T01:29:52.465469646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:29:52.488183 containerd[1711]: time="2024-12-13T01:29:52.487980296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:52.488183 containerd[1711]: time="2024-12-13T01:29:52.488043096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:52.488183 containerd[1711]: time="2024-12-13T01:29:52.488067256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:52.488183 containerd[1711]: time="2024-12-13T01:29:52.488139137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:52.507650 systemd[1]: Started cri-containerd-09d68bc15bc18078a7a10da2082da9a96130af375b7c001cd990ff8f1785be66.scope - libcontainer container 09d68bc15bc18078a7a10da2082da9a96130af375b7c001cd990ff8f1785be66. Dec 13 01:29:52.533034 containerd[1711]: time="2024-12-13T01:29:52.532909516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kz488,Uid:ac1b6b70-e02e-4133-9078-db4dc09296f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"09d68bc15bc18078a7a10da2082da9a96130af375b7c001cd990ff8f1785be66\"" Dec 13 01:29:53.592045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307269654.mount: Deactivated successfully. 
Dec 13 01:29:53.888505 kubelet[3168]: E1213 01:29:53.888465 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kq89" podUID="a987f675-3896-4490-b719-7c769af12cf2" Dec 13 01:29:54.056001 containerd[1711]: time="2024-12-13T01:29:54.055956331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:54.059093 containerd[1711]: time="2024-12-13T01:29:54.059061858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 01:29:54.062810 containerd[1711]: time="2024-12-13T01:29:54.062738906Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:54.068209 containerd[1711]: time="2024-12-13T01:29:54.067825917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:54.069231 containerd[1711]: time="2024-12-13T01:29:54.069121480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.603583233s" Dec 13 01:29:54.069231 containerd[1711]: time="2024-12-13T01:29:54.069155440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 01:29:54.071563 containerd[1711]: time="2024-12-13T01:29:54.071539766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:29:54.084771 containerd[1711]: time="2024-12-13T01:29:54.083274632Z" level=info msg="CreateContainer within sandbox \"6122a49e09e0a34341354db47b3c317a7a6d4d12875ea82c91d50458448de6f9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:29:54.124893 containerd[1711]: time="2024-12-13T01:29:54.124844884Z" level=info msg="CreateContainer within sandbox \"6122a49e09e0a34341354db47b3c317a7a6d4d12875ea82c91d50458448de6f9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"db72e50a76572e6c333e20d0bb83e5f7aab462940dd745b125c4bc55a85de0f2\"" Dec 13 01:29:54.126547 containerd[1711]: time="2024-12-13T01:29:54.125645686Z" level=info msg="StartContainer for \"db72e50a76572e6c333e20d0bb83e5f7aab462940dd745b125c4bc55a85de0f2\"" Dec 13 01:29:54.152691 systemd[1]: Started cri-containerd-db72e50a76572e6c333e20d0bb83e5f7aab462940dd745b125c4bc55a85de0f2.scope - libcontainer container db72e50a76572e6c333e20d0bb83e5f7aab462940dd745b125c4bc55a85de0f2. 
Dec 13 01:29:54.185047 containerd[1711]: time="2024-12-13T01:29:54.185003537Z" level=info msg="StartContainer for \"db72e50a76572e6c333e20d0bb83e5f7aab462940dd745b125c4bc55a85de0f2\" returns successfully" Dec 13 01:29:54.984484 kubelet[3168]: I1213 01:29:54.984423 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-799b6dc4f4-xxtsb" podStartSLOduration=1.378732951 podStartE2EDuration="2.984409869s" podCreationTimestamp="2024-12-13 01:29:52 +0000 UTC" firstStartedPulling="2024-12-13 01:29:52.465162886 +0000 UTC m=+17.682561612" lastFinishedPulling="2024-12-13 01:29:54.070839804 +0000 UTC m=+19.288238530" observedRunningTime="2024-12-13 01:29:54.984120348 +0000 UTC m=+20.201519074" watchObservedRunningTime="2024-12-13 01:29:54.984409869 +0000 UTC m=+20.201808555" Dec 13 01:29:55.017568 kubelet[3168]: E1213 01:29:55.017537 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.017568 kubelet[3168]: W1213 01:29:55.017561 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.017716 kubelet[3168]: E1213 01:29:55.017580 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.017869 kubelet[3168]: E1213 01:29:55.017849 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.017869 kubelet[3168]: W1213 01:29:55.017869 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.017931 kubelet[3168]: E1213 01:29:55.017881 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.018059 kubelet[3168]: E1213 01:29:55.018036 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.018059 kubelet[3168]: W1213 01:29:55.018055 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.018184 kubelet[3168]: E1213 01:29:55.018064 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.018223 kubelet[3168]: E1213 01:29:55.018208 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.018223 kubelet[3168]: W1213 01:29:55.018216 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.018265 kubelet[3168]: E1213 01:29:55.018223 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:55.018387 kubelet[3168]: E1213 01:29:55.018369 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.018387 kubelet[3168]: W1213 01:29:55.018383 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.018433 kubelet[3168]: E1213 01:29:55.018392 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.018586 kubelet[3168]: E1213 01:29:55.018569 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.018586 kubelet[3168]: W1213 01:29:55.018582 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.018649 kubelet[3168]: E1213 01:29:55.018590 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.018750 kubelet[3168]: E1213 01:29:55.018731 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.018750 kubelet[3168]: W1213 01:29:55.018745 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.018874 kubelet[3168]: E1213 01:29:55.018753 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.018912 kubelet[3168]: E1213 01:29:55.018898 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.018912 kubelet[3168]: W1213 01:29:55.018905 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.018955 kubelet[3168]: E1213 01:29:55.018912 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.019111 kubelet[3168]: E1213 01:29:55.019092 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.019111 kubelet[3168]: W1213 01:29:55.019107 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.019158 kubelet[3168]: E1213 01:29:55.019116 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:55.019294 kubelet[3168]: E1213 01:29:55.019276 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.019294 kubelet[3168]: W1213 01:29:55.019289 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.019352 kubelet[3168]: E1213 01:29:55.019297 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.019452 kubelet[3168]: E1213 01:29:55.019434 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.019452 kubelet[3168]: W1213 01:29:55.019448 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.019589 kubelet[3168]: E1213 01:29:55.019456 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.019630 kubelet[3168]: E1213 01:29:55.019616 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.019630 kubelet[3168]: W1213 01:29:55.019623 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.019680 kubelet[3168]: E1213 01:29:55.019633 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.019824 kubelet[3168]: E1213 01:29:55.019804 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.019824 kubelet[3168]: W1213 01:29:55.019820 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.019879 kubelet[3168]: E1213 01:29:55.019829 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.020020 kubelet[3168]: E1213 01:29:55.020002 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.020066 kubelet[3168]: W1213 01:29:55.020050 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.020066 kubelet[3168]: E1213 01:29:55.020063 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:55.020292 kubelet[3168]: E1213 01:29:55.020272 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.020292 kubelet[3168]: W1213 01:29:55.020287 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.020357 kubelet[3168]: E1213 01:29:55.020297 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.035683 kubelet[3168]: E1213 01:29:55.035661 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.035683 kubelet[3168]: W1213 01:29:55.035679 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.035782 kubelet[3168]: E1213 01:29:55.035691 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.035910 kubelet[3168]: E1213 01:29:55.035891 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.035910 kubelet[3168]: W1213 01:29:55.035907 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.035973 kubelet[3168]: E1213 01:29:55.035926 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.036141 kubelet[3168]: E1213 01:29:55.036124 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.036141 kubelet[3168]: W1213 01:29:55.036138 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.036200 kubelet[3168]: E1213 01:29:55.036153 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.036324 kubelet[3168]: E1213 01:29:55.036310 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.036324 kubelet[3168]: W1213 01:29:55.036322 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.036389 kubelet[3168]: E1213 01:29:55.036334 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:55.036590 kubelet[3168]: E1213 01:29:55.036573 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.036590 kubelet[3168]: W1213 01:29:55.036589 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.036657 kubelet[3168]: E1213 01:29:55.036603 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.036790 kubelet[3168]: E1213 01:29:55.036775 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.036790 kubelet[3168]: W1213 01:29:55.036788 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.036847 kubelet[3168]: E1213 01:29:55.036820 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.037032 kubelet[3168]: E1213 01:29:55.037014 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.037032 kubelet[3168]: W1213 01:29:55.037027 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.037109 kubelet[3168]: E1213 01:29:55.037091 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.037439 kubelet[3168]: E1213 01:29:55.037416 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.037439 kubelet[3168]: W1213 01:29:55.037433 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.037573 kubelet[3168]: E1213 01:29:55.037541 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.037609 kubelet[3168]: E1213 01:29:55.037583 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.037674 kubelet[3168]: W1213 01:29:55.037615 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.037674 kubelet[3168]: E1213 01:29:55.037642 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:55.037821 kubelet[3168]: E1213 01:29:55.037803 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.037821 kubelet[3168]: W1213 01:29:55.037814 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.037882 kubelet[3168]: E1213 01:29:55.037831 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.038107 kubelet[3168]: E1213 01:29:55.038085 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.038107 kubelet[3168]: W1213 01:29:55.038102 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.038184 kubelet[3168]: E1213 01:29:55.038118 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.038304 kubelet[3168]: E1213 01:29:55.038284 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.038304 kubelet[3168]: W1213 01:29:55.038303 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.038364 kubelet[3168]: E1213 01:29:55.038321 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.038517 kubelet[3168]: E1213 01:29:55.038501 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.038517 kubelet[3168]: W1213 01:29:55.038515 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.038577 kubelet[3168]: E1213 01:29:55.038528 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.038848 kubelet[3168]: E1213 01:29:55.038829 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.038848 kubelet[3168]: W1213 01:29:55.038844 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.038964 kubelet[3168]: E1213 01:29:55.038934 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:55.039015 kubelet[3168]: E1213 01:29:55.038980 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.039015 kubelet[3168]: W1213 01:29:55.038987 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.039015 kubelet[3168]: E1213 01:29:55.038997 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.039196 kubelet[3168]: E1213 01:29:55.039160 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.039196 kubelet[3168]: W1213 01:29:55.039168 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.039196 kubelet[3168]: E1213 01:29:55.039177 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.039337 kubelet[3168]: E1213 01:29:55.039323 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.039337 kubelet[3168]: W1213 01:29:55.039335 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.039389 kubelet[3168]: E1213 01:29:55.039346 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:55.039705 kubelet[3168]: E1213 01:29:55.039688 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:55.039705 kubelet[3168]: W1213 01:29:55.039703 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:55.039774 kubelet[3168]: E1213 01:29:55.039713 3168 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:55.225837 containerd[1711]: time="2024-12-13T01:29:55.225549283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:55.227850 containerd[1711]: time="2024-12-13T01:29:55.227816248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:29:55.231350 containerd[1711]: time="2024-12-13T01:29:55.231306016Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:55.236532 containerd[1711]: time="2024-12-13T01:29:55.236402227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:55.237982 containerd[1711]: time="2024-12-13T01:29:55.237656550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.166034784s" Dec 13 01:29:55.237982 containerd[1711]: time="2024-12-13T01:29:55.237688470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:29:55.240830 containerd[1711]: time="2024-12-13T01:29:55.240797717Z" level=info msg="CreateContainer within sandbox \"09d68bc15bc18078a7a10da2082da9a96130af375b7c001cd990ff8f1785be66\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:29:55.296190 containerd[1711]: time="2024-12-13T01:29:55.296146200Z" level=info msg="CreateContainer within sandbox \"09d68bc15bc18078a7a10da2082da9a96130af375b7c001cd990ff8f1785be66\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"acc19e5c08972da47cf58fdab241b3c4e458b0f07ef945678ab5058df9cea09b\"" Dec 13 01:29:55.297069 containerd[1711]: time="2024-12-13T01:29:55.297036761Z" level=info msg="StartContainer for \"acc19e5c08972da47cf58fdab241b3c4e458b0f07ef945678ab5058df9cea09b\"" Dec 13 01:29:55.331648 systemd[1]: Started cri-containerd-acc19e5c08972da47cf58fdab241b3c4e458b0f07ef945678ab5058df9cea09b.scope - libcontainer container acc19e5c08972da47cf58fdab241b3c4e458b0f07ef945678ab5058df9cea09b. Dec 13 01:29:55.364377 containerd[1711]: time="2024-12-13T01:29:55.364032030Z" level=info msg="StartContainer for \"acc19e5c08972da47cf58fdab241b3c4e458b0f07ef945678ab5058df9cea09b\" returns successfully" Dec 13 01:29:55.381448 systemd[1]: cri-containerd-acc19e5c08972da47cf58fdab241b3c4e458b0f07ef945678ab5058df9cea09b.scope: Deactivated successfully. Dec 13 01:29:55.400788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acc19e5c08972da47cf58fdab241b3c4e458b0f07ef945678ab5058df9cea09b-rootfs.mount: Deactivated successfully. 
Dec 13 01:29:55.888784 kubelet[3168]: E1213 01:29:55.888733 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kq89" podUID="a987f675-3896-4490-b719-7c769af12cf2" Dec 13 01:29:55.972225 kubelet[3168]: I1213 01:29:55.971151 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:29:56.293461 containerd[1711]: time="2024-12-13T01:29:56.293328929Z" level=info msg="shim disconnected" id=acc19e5c08972da47cf58fdab241b3c4e458b0f07ef945678ab5058df9cea09b namespace=k8s.io Dec 13 01:29:56.293461 containerd[1711]: time="2024-12-13T01:29:56.293383530Z" level=warning msg="cleaning up after shim disconnected" id=acc19e5c08972da47cf58fdab241b3c4e458b0f07ef945678ab5058df9cea09b namespace=k8s.io Dec 13 01:29:56.293461 containerd[1711]: time="2024-12-13T01:29:56.293392770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:56.975651 containerd[1711]: time="2024-12-13T01:29:56.975608121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:29:57.888869 kubelet[3168]: E1213 01:29:57.888814 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kq89" podUID="a987f675-3896-4490-b719-7c769af12cf2" Dec 13 01:29:59.835559 containerd[1711]: time="2024-12-13T01:29:59.835475899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:59.841394 containerd[1711]: time="2024-12-13T01:29:59.841251152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:29:59.843627 containerd[1711]: time="2024-12-13T01:29:59.843596557Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:59.848011 containerd[1711]: time="2024-12-13T01:29:59.847753486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:59.848472 containerd[1711]: time="2024-12-13T01:29:59.848440928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.872791007s" Dec 13 01:29:59.848472 containerd[1711]: time="2024-12-13T01:29:59.848470048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:29:59.851571 containerd[1711]: time="2024-12-13T01:29:59.851444135Z" level=info msg="CreateContainer within sandbox \"09d68bc15bc18078a7a10da2082da9a96130af375b7c001cd990ff8f1785be66\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:29:59.888759 kubelet[3168]: E1213 01:29:59.888694 3168 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kq89" podUID="a987f675-3896-4490-b719-7c769af12cf2" Dec 13 01:29:59.899127 containerd[1711]: time="2024-12-13T01:29:59.899030920Z" level=info msg="CreateContainer within sandbox \"09d68bc15bc18078a7a10da2082da9a96130af375b7c001cd990ff8f1785be66\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6ebf47a697c8adde0062776d55ad2d86fbbd46ceecac5dfc79b557842a6febf9\"" Dec 13 01:29:59.900250 containerd[1711]: time="2024-12-13T01:29:59.899594801Z" level=info msg="StartContainer for \"6ebf47a697c8adde0062776d55ad2d86fbbd46ceecac5dfc79b557842a6febf9\"" Dec 13 01:29:59.930643 systemd[1]: Started cri-containerd-6ebf47a697c8adde0062776d55ad2d86fbbd46ceecac5dfc79b557842a6febf9.scope - libcontainer container 6ebf47a697c8adde0062776d55ad2d86fbbd46ceecac5dfc79b557842a6febf9. Dec 13 01:29:59.957677 containerd[1711]: time="2024-12-13T01:29:59.957632610Z" level=info msg="StartContainer for \"6ebf47a697c8adde0062776d55ad2d86fbbd46ceecac5dfc79b557842a6febf9\" returns successfully" Dec 13 01:30:01.378701 systemd[1]: cri-containerd-6ebf47a697c8adde0062776d55ad2d86fbbd46ceecac5dfc79b557842a6febf9.scope: Deactivated successfully. Dec 13 01:30:01.396938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ebf47a697c8adde0062776d55ad2d86fbbd46ceecac5dfc79b557842a6febf9-rootfs.mount: Deactivated successfully. Dec 13 01:30:01.454939 kubelet[3168]: I1213 01:30:01.454892 3168 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:30:01.587688 kubelet[3168]: I1213 01:30:01.578272 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3ab96e8-d944-493f-9479-dccde4369fe1-config-volume\") pod \"coredns-6f6b679f8f-pvjm5\" (UID: \"d3ab96e8-d944-493f-9479-dccde4369fe1\") " pod="kube-system/coredns-6f6b679f8f-pvjm5" Dec 13 01:30:01.587688 kubelet[3168]: I1213 01:30:01.578315 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c45531f-b3b4-4928-a6d3-7b32cfab7875-tigera-ca-bundle\") pod \"calico-kube-controllers-696589d6dc-8hhq2\" (UID: \"5c45531f-b3b4-4928-a6d3-7b32cfab7875\") " pod="calico-system/calico-kube-controllers-696589d6dc-8hhq2" Dec 13 01:30:01.587688 kubelet[3168]: I1213 01:30:01.578335 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd6qv\" (UniqueName: \"kubernetes.io/projected/5c45531f-b3b4-4928-a6d3-7b32cfab7875-kube-api-access-fd6qv\") pod \"calico-kube-controllers-696589d6dc-8hhq2\" (UID: \"5c45531f-b3b4-4928-a6d3-7b32cfab7875\") " pod="calico-system/calico-kube-controllers-696589d6dc-8hhq2" Dec 13 01:30:01.587688 kubelet[3168]: I1213 01:30:01.578355 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nng92\" (UniqueName: \"kubernetes.io/projected/820af745-97f5-43f0-a795-d962b8d83e56-kube-api-access-nng92\") pod \"coredns-6f6b679f8f-clhzm\" (UID: \"820af745-97f5-43f0-a795-d962b8d83e56\") " pod="kube-system/coredns-6f6b679f8f-clhzm" Dec 13 01:30:01.587688 kubelet[3168]: I1213 01:30:01.578373 3168 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbn4n\" (UniqueName: \"kubernetes.io/projected/e7f7e2e0-1463-4f9b-9827-a422072878d0-kube-api-access-kbn4n\") pod \"calico-apiserver-658975bcf4-wgcm5\" (UID: \"e7f7e2e0-1463-4f9b-9827-a422072878d0\") " pod="calico-apiserver/calico-apiserver-658975bcf4-wgcm5" Dec 13 01:30:01.496884 systemd[1]: Created slice kubepods-burstable-podd3ab96e8_d944_493f_9479_dccde4369fe1.slice - libcontainer container kubepods-burstable-podd3ab96e8_d944_493f_9479_dccde4369fe1.slice. Dec 13 01:30:01.587942 kubelet[3168]: I1213 01:30:01.578392 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zxpz\" (UniqueName: \"kubernetes.io/projected/d3ab96e8-d944-493f-9479-dccde4369fe1-kube-api-access-7zxpz\") pod \"coredns-6f6b679f8f-pvjm5\" (UID: \"d3ab96e8-d944-493f-9479-dccde4369fe1\") " pod="kube-system/coredns-6f6b679f8f-pvjm5" Dec 13 01:30:01.587942 kubelet[3168]: I1213 01:30:01.578409 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/820af745-97f5-43f0-a795-d962b8d83e56-config-volume\") pod \"coredns-6f6b679f8f-clhzm\" (UID: \"820af745-97f5-43f0-a795-d962b8d83e56\") " pod="kube-system/coredns-6f6b679f8f-clhzm" Dec 13 01:30:01.587942 kubelet[3168]: I1213 01:30:01.578425 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7f7e2e0-1463-4f9b-9827-a422072878d0-calico-apiserver-certs\") pod \"calico-apiserver-658975bcf4-wgcm5\" (UID: \"e7f7e2e0-1463-4f9b-9827-a422072878d0\") " pod="calico-apiserver/calico-apiserver-658975bcf4-wgcm5" Dec 13 01:30:01.587942 kubelet[3168]: I1213 01:30:01.578443 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5e3c1dd-da7e-42cd-b3e9-c3b3953719af-calico-apiserver-certs\") pod \"calico-apiserver-658975bcf4-zrhvc\" (UID: \"c5e3c1dd-da7e-42cd-b3e9-c3b3953719af\") " pod="calico-apiserver/calico-apiserver-658975bcf4-zrhvc" Dec 13 01:30:01.587942 kubelet[3168]: I1213 01:30:01.578461 3168 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlnp\" (UniqueName: \"kubernetes.io/projected/c5e3c1dd-da7e-42cd-b3e9-c3b3953719af-kube-api-access-7nlnp\") pod \"calico-apiserver-658975bcf4-zrhvc\" (UID: \"c5e3c1dd-da7e-42cd-b3e9-c3b3953719af\") " pod="calico-apiserver/calico-apiserver-658975bcf4-zrhvc" Dec 13 01:30:01.509320 systemd[1]: Created slice kubepods-burstable-pod820af745_97f5_43f0_a795_d962b8d83e56.slice - libcontainer container kubepods-burstable-pod820af745_97f5_43f0_a795_d962b8d83e56.slice. Dec 13 01:30:01.516000 systemd[1]: Created slice kubepods-besteffort-pod5c45531f_b3b4_4928_a6d3_7b32cfab7875.slice - libcontainer container kubepods-besteffort-pod5c45531f_b3b4_4928_a6d3_7b32cfab7875.slice. Dec 13 01:30:01.526201 systemd[1]: Created slice kubepods-besteffort-podc5e3c1dd_da7e_42cd_b3e9_c3b3953719af.slice - libcontainer container kubepods-besteffort-podc5e3c1dd_da7e_42cd_b3e9_c3b3953719af.slice. Dec 13 01:30:01.533519 systemd[1]: Created slice kubepods-besteffort-pode7f7e2e0_1463_4f9b_9827_a422072878d0.slice - libcontainer container kubepods-besteffort-pode7f7e2e0_1463_4f9b_9827_a422072878d0.slice. 
Dec 13 01:30:01.889740 containerd[1711]: time="2024-12-13T01:30:01.889212561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pvjm5,Uid:d3ab96e8-d944-493f-9479-dccde4369fe1,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:01.890081 containerd[1711]: time="2024-12-13T01:30:01.889829363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658975bcf4-zrhvc,Uid:c5e3c1dd-da7e-42cd-b3e9-c3b3953719af,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:30:01.892009 containerd[1711]: time="2024-12-13T01:30:01.891819647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-clhzm,Uid:820af745-97f5-43f0-a795-d962b8d83e56,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:01.892009 containerd[1711]: time="2024-12-13T01:30:01.891932207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658975bcf4-wgcm5,Uid:e7f7e2e0-1463-4f9b-9827-a422072878d0,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:30:01.892727 containerd[1711]: time="2024-12-13T01:30:01.892695288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696589d6dc-8hhq2,Uid:5c45531f-b3b4-4928-a6d3-7b32cfab7875,Namespace:calico-system,Attempt:0,}" Dec 13 01:30:01.896355 systemd[1]: Created slice kubepods-besteffort-poda987f675_3896_4490_b719_7c769af12cf2.slice - libcontainer container kubepods-besteffort-poda987f675_3896_4490_b719_7c769af12cf2.slice. Dec 13 01:30:01.898753 containerd[1711]: time="2024-12-13T01:30:01.898677221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kq89,Uid:a987f675-3896-4490-b719-7c769af12cf2,Namespace:calico-system,Attempt:0,}" Dec 13 01:30:02.106027 containerd[1711]: time="2024-12-13T01:30:02.105967846Z" level=info msg="shim disconnected" id=6ebf47a697c8adde0062776d55ad2d86fbbd46ceecac5dfc79b557842a6febf9 namespace=k8s.io Dec 13 01:30:02.106027 containerd[1711]: time="2024-12-13T01:30:02.106019406Z" level=warning msg="cleaning up after shim disconnected" id=6ebf47a697c8adde0062776d55ad2d86fbbd46ceecac5dfc79b557842a6febf9 namespace=k8s.io Dec 13 01:30:02.106027 containerd[1711]: time="2024-12-13T01:30:02.106028006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:02.370155 containerd[1711]: time="2024-12-13T01:30:02.369674627Z" level=error msg="Failed to destroy network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.370155 containerd[1711]: time="2024-12-13T01:30:02.370004228Z" level=error msg="encountered an error cleaning up failed sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.370155 containerd[1711]: time="2024-12-13T01:30:02.370093988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696589d6dc-8hhq2,Uid:5c45531f-b3b4-4928-a6d3-7b32cfab7875,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.370362 containerd[1711]: time="2024-12-13T01:30:02.370247708Z" level=error msg="Failed to destroy network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.372569 containerd[1711]: time="2024-12-13T01:30:02.370521949Z" level=error msg="encountered an error cleaning up failed sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.372569 containerd[1711]: time="2024-12-13T01:30:02.370580549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pvjm5,Uid:d3ab96e8-d944-493f-9479-dccde4369fe1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.372715 kubelet[3168]: E1213 01:30:02.370922 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.372715 kubelet[3168]: E1213 01:30:02.370993 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pvjm5" Dec 13 01:30:02.372715 kubelet[3168]: E1213 01:30:02.371010 3168 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pvjm5" Dec 13 01:30:02.372793 kubelet[3168]: E1213 01:30:02.371048 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-pvjm5_kube-system(d3ab96e8-d944-493f-9479-dccde4369fe1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-pvjm5_kube-system(d3ab96e8-d944-493f-9479-dccde4369fe1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pvjm5" podUID="d3ab96e8-d944-493f-9479-dccde4369fe1" Dec 13 01:30:02.372793 kubelet[3168]: E1213 01:30:02.371090 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.372793 kubelet[3168]: E1213 01:30:02.371107 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-696589d6dc-8hhq2" Dec 13 01:30:02.372883 kubelet[3168]: E1213 01:30:02.371120 3168 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-696589d6dc-8hhq2" Dec 13 01:30:02.372883 kubelet[3168]: E1213 01:30:02.371140 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-696589d6dc-8hhq2_calico-system(5c45531f-b3b4-4928-a6d3-7b32cfab7875)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-696589d6dc-8hhq2_calico-system(5c45531f-b3b4-4928-a6d3-7b32cfab7875)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-696589d6dc-8hhq2" podUID="5c45531f-b3b4-4928-a6d3-7b32cfab7875" Dec 13 01:30:02.424871 containerd[1711]: time="2024-12-13T01:30:02.424815620Z" level=error msg="Failed to destroy network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.425205 containerd[1711]: time="2024-12-13T01:30:02.425162501Z" level=error msg="encountered an error cleaning up failed sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.425250 containerd[1711]: time="2024-12-13T01:30:02.425224301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kq89,Uid:a987f675-3896-4490-b719-7c769af12cf2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.426963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13-shm.mount: Deactivated successfully. Dec 13 01:30:02.427725 containerd[1711]: time="2024-12-13T01:30:02.427560026Z" level=error msg="Failed to destroy network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.431473 kubelet[3168]: E1213 01:30:02.429657 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.431473 kubelet[3168]: E1213 01:30:02.429739 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kq89" Dec 13 01:30:02.431473 kubelet[3168]: E1213 01:30:02.429762 3168 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kq89" Dec 13 01:30:02.431421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d-shm.mount: Deactivated successfully. 
Dec 13 01:30:02.431689 containerd[1711]: time="2024-12-13T01:30:02.430689792Z" level=error msg="encountered an error cleaning up failed sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.431689 containerd[1711]: time="2024-12-13T01:30:02.431573474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-clhzm,Uid:820af745-97f5-43f0-a795-d962b8d83e56,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.431738 kubelet[3168]: E1213 01:30:02.429906 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9kq89_calico-system(a987f675-3896-4490-b719-7c769af12cf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9kq89_calico-system(a987f675-3896-4490-b719-7c769af12cf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kq89" podUID="a987f675-3896-4490-b719-7c769af12cf2" Dec 13 01:30:02.433333 kubelet[3168]: E1213 01:30:02.432421 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.433333 kubelet[3168]: E1213 01:30:02.432482 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-clhzm" Dec 13 01:30:02.433333 kubelet[3168]: E1213 01:30:02.432527 3168 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-clhzm" Dec 13 01:30:02.433481 kubelet[3168]: E1213 01:30:02.432564 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-clhzm_kube-system(820af745-97f5-43f0-a795-d962b8d83e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-clhzm_kube-system(820af745-97f5-43f0-a795-d962b8d83e56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-clhzm" podUID="820af745-97f5-43f0-a795-d962b8d83e56" Dec 13 01:30:02.439907 containerd[1711]: time="2024-12-13T01:30:02.439803371Z" level=error msg="Failed to destroy network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.440303 containerd[1711]: time="2024-12-13T01:30:02.440144372Z" level=error msg="encountered an error cleaning up failed sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.440303 containerd[1711]: time="2024-12-13T01:30:02.440190132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658975bcf4-zrhvc,Uid:c5e3c1dd-da7e-42cd-b3e9-c3b3953719af,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.442881 kubelet[3168]: E1213 01:30:02.442565 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.442881 kubelet[3168]: E1213 01:30:02.442638 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-658975bcf4-zrhvc" Dec 13 01:30:02.442881 kubelet[3168]: E1213 01:30:02.442657 3168 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-658975bcf4-zrhvc" Dec 13 01:30:02.443032 kubelet[3168]: E1213 01:30:02.442694 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-658975bcf4-zrhvc_calico-apiserver(c5e3c1dd-da7e-42cd-b3e9-c3b3953719af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-658975bcf4-zrhvc_calico-apiserver(c5e3c1dd-da7e-42cd-b3e9-c3b3953719af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-658975bcf4-zrhvc" podUID="c5e3c1dd-da7e-42cd-b3e9-c3b3953719af" Dec 13 01:30:02.443825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b-shm.mount: Deactivated successfully. Dec 13 01:30:02.445859 containerd[1711]: time="2024-12-13T01:30:02.445814343Z" level=error msg="Failed to destroy network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.446178 containerd[1711]: time="2024-12-13T01:30:02.446153744Z" level=error msg="encountered an error cleaning up failed sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.446298 containerd[1711]: time="2024-12-13T01:30:02.446277584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658975bcf4-wgcm5,Uid:e7f7e2e0-1463-4f9b-9827-a422072878d0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.446690 kubelet[3168]: E1213 01:30:02.446569 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:02.446690 kubelet[3168]: E1213 01:30:02.446605 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-658975bcf4-wgcm5" Dec 13 01:30:02.446690 kubelet[3168]: E1213 01:30:02.446621 3168 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-658975bcf4-wgcm5" Dec 13 01:30:02.446811 kubelet[3168]: E1213 01:30:02.446647 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-658975bcf4-wgcm5_calico-apiserver(e7f7e2e0-1463-4f9b-9827-a422072878d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-658975bcf4-wgcm5_calico-apiserver(e7f7e2e0-1463-4f9b-9827-a422072878d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-658975bcf4-wgcm5" podUID="e7f7e2e0-1463-4f9b-9827-a422072878d0" Dec 13 01:30:02.447607 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f-shm.mount: Deactivated successfully. Dec 13 01:30:02.987351 kubelet[3168]: I1213 01:30:02.986950 3168 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:02.989003 containerd[1711]: time="2024-12-13T01:30:02.987735335Z" level=info msg="StopPodSandbox for \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\"" Dec 13 01:30:02.989003 containerd[1711]: time="2024-12-13T01:30:02.987973575Z" level=info msg="Ensure that sandbox 720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b in task-service has been cleanup successfully" Dec 13 01:30:02.989609 kubelet[3168]: I1213 01:30:02.989435 3168 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:02.990189 containerd[1711]: time="2024-12-13T01:30:02.989864619Z" level=info msg="StopPodSandbox for \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\"" Dec 13 01:30:02.990189 containerd[1711]: time="2024-12-13T01:30:02.990000659Z" level=info msg="Ensure that sandbox ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b in task-service has been cleanup successfully" Dec 13 01:30:02.991565 kubelet[3168]: I1213 01:30:02.991533 3168 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:02.992218 containerd[1711]: time="2024-12-13T01:30:02.992187384Z" level=info msg="StopPodSandbox for \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\"" Dec 13 01:30:02.993150 containerd[1711]: time="2024-12-13T01:30:02.992953786Z" level=info msg="Ensure that sandbox d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13 in task-service has been cleanup successfully" Dec 13 01:30:02.995985 kubelet[3168]: I1213 01:30:02.995958 3168 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:02.997160 containerd[1711]: time="2024-12-13T01:30:02.996956434Z" level=info msg="StopPodSandbox for \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\"" Dec 13 01:30:02.997948 containerd[1711]: time="2024-12-13T01:30:02.997904796Z" 
level=info msg="Ensure that sandbox c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d in task-service has been cleanup successfully" Dec 13 01:30:03.008090 containerd[1711]: time="2024-12-13T01:30:03.008046097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:30:03.010901 kubelet[3168]: I1213 01:30:03.010802 3168 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:03.018993 containerd[1711]: time="2024-12-13T01:30:03.018572318Z" level=info msg="StopPodSandbox for \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\"" Dec 13 01:30:03.018993 containerd[1711]: time="2024-12-13T01:30:03.018787439Z" level=info msg="Ensure that sandbox e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9 in task-service has been cleanup successfully" Dec 13 01:30:03.028714 kubelet[3168]: I1213 01:30:03.028591 3168 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:03.031737 containerd[1711]: time="2024-12-13T01:30:03.031485705Z" level=info msg="StopPodSandbox for \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\"" Dec 13 01:30:03.035957 containerd[1711]: time="2024-12-13T01:30:03.034324230Z" level=info msg="Ensure that sandbox c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f in task-service has been cleanup successfully" Dec 13 01:30:03.077802 containerd[1711]: time="2024-12-13T01:30:03.077668999Z" level=error msg="StopPodSandbox for \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\" failed" error="failed to destroy network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:03.078282 kubelet[3168]: E1213 01:30:03.078131 3168 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:03.078793 kubelet[3168]: E1213 01:30:03.078221 3168 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b"} Dec 13 01:30:03.078793 kubelet[3168]: E1213 01:30:03.078647 3168 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3ab96e8-d944-493f-9479-dccde4369fe1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:03.078793 kubelet[3168]: E1213 01:30:03.078676 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3ab96e8-d944-493f-9479-dccde4369fe1\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pvjm5" podUID="d3ab96e8-d944-493f-9479-dccde4369fe1" Dec 13 01:30:03.084381 containerd[1711]: time="2024-12-13T01:30:03.084155093Z" level=error msg="StopPodSandbox for \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\" failed" error="failed to destroy network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:03.084882 kubelet[3168]: E1213 01:30:03.084598 3168 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:03.084882 kubelet[3168]: E1213 01:30:03.084648 3168 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b"} Dec 13 01:30:03.084882 kubelet[3168]: E1213 01:30:03.084680 3168 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5e3c1dd-da7e-42cd-b3e9-c3b3953719af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:03.084882 kubelet[3168]: E1213 01:30:03.084709 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5e3c1dd-da7e-42cd-b3e9-c3b3953719af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-658975bcf4-zrhvc" podUID="c5e3c1dd-da7e-42cd-b3e9-c3b3953719af" Dec 13 01:30:03.086017 containerd[1711]: time="2024-12-13T01:30:03.085955616Z" level=error msg="StopPodSandbox for \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\" failed" error="failed to destroy network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:03.086163 kubelet[3168]: E1213 01:30:03.086134 3168 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:03.086308 kubelet[3168]: E1213 01:30:03.086277 3168 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9"} Dec 13 01:30:03.086348 kubelet[3168]: E1213 01:30:03.086310 3168 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c45531f-b3b4-4928-a6d3-7b32cfab7875\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:03.087522 kubelet[3168]: E1213 01:30:03.086427 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c45531f-b3b4-4928-a6d3-7b32cfab7875\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-696589d6dc-8hhq2" podUID="5c45531f-b3b4-4928-a6d3-7b32cfab7875" Dec 13 01:30:03.093657 containerd[1711]: time="2024-12-13T01:30:03.093614512Z" level=error msg="StopPodSandbox for \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\" failed" error="failed to destroy network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:03.093791 kubelet[3168]: E1213 01:30:03.093759 3168 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:03.093835 kubelet[3168]: E1213 01:30:03.093795 3168 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13"} Dec 13 01:30:03.093835 kubelet[3168]: E1213 01:30:03.093819 3168 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a987f675-3896-4490-b719-7c769af12cf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:03.093950 kubelet[3168]: E1213 01:30:03.093840 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a987f675-3896-4490-b719-7c769af12cf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kq89" podUID="a987f675-3896-4490-b719-7c769af12cf2" Dec 13 01:30:03.097033 containerd[1711]: time="2024-12-13T01:30:03.096980319Z" level=error msg="StopPodSandbox for \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\" failed" error="failed to destroy network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:03.097353 kubelet[3168]: E1213 01:30:03.097151 3168 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:03.097353 kubelet[3168]: E1213 01:30:03.097198 3168 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d"} Dec 13 01:30:03.097353 kubelet[3168]: E1213 01:30:03.097224 3168 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"820af745-97f5-43f0-a795-d962b8d83e56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:03.097353 kubelet[3168]: E1213 01:30:03.097241 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"820af745-97f5-43f0-a795-d962b8d83e56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-clhzm" podUID="820af745-97f5-43f0-a795-d962b8d83e56" Dec 13 01:30:03.108152 containerd[1711]: time="2024-12-13T01:30:03.108063062Z" level=error msg="StopPodSandbox for \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\" failed" error="failed to destroy network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:03.108320 kubelet[3168]: E1213 01:30:03.108264 3168 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:03.108320 kubelet[3168]: E1213 01:30:03.108298 3168 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f"} Dec 13 01:30:03.108417 kubelet[3168]: E1213 01:30:03.108323 3168 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7f7e2e0-1463-4f9b-9827-a422072878d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:03.108417 kubelet[3168]: E1213 01:30:03.108343 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7f7e2e0-1463-4f9b-9827-a422072878d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-658975bcf4-wgcm5" podUID="e7f7e2e0-1463-4f9b-9827-a422072878d0" Dec 13 01:30:09.690286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832929026.mount: Deactivated successfully. 
Dec 13 01:30:10.077725 containerd[1711]: time="2024-12-13T01:30:10.076918787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:10.080200 containerd[1711]: time="2024-12-13T01:30:10.080167555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:30:10.083321 containerd[1711]: time="2024-12-13T01:30:10.083272001Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:10.088182 containerd[1711]: time="2024-12-13T01:30:10.088152292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:10.089121 containerd[1711]: time="2024-12-13T01:30:10.088637693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.080539476s" Dec 13 01:30:10.089121 containerd[1711]: time="2024-12-13T01:30:10.088674813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:30:10.100211 containerd[1711]: time="2024-12-13T01:30:10.100174079Z" level=info msg="CreateContainer within sandbox \"09d68bc15bc18078a7a10da2082da9a96130af375b7c001cd990ff8f1785be66\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:30:10.148378 containerd[1711]: time="2024-12-13T01:30:10.148323426Z" level=info msg="CreateContainer within sandbox \"09d68bc15bc18078a7a10da2082da9a96130af375b7c001cd990ff8f1785be66\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"65a7333b2cf4df78f8071bc3357a0121c1a26357b50eb33b5e7700c347f10160\"" Dec 13 01:30:10.149913 containerd[1711]: time="2024-12-13T01:30:10.148976147Z" level=info msg="StartContainer for \"65a7333b2cf4df78f8071bc3357a0121c1a26357b50eb33b5e7700c347f10160\"" Dec 13 01:30:10.172637 systemd[1]: Started cri-containerd-65a7333b2cf4df78f8071bc3357a0121c1a26357b50eb33b5e7700c347f10160.scope - libcontainer container 65a7333b2cf4df78f8071bc3357a0121c1a26357b50eb33b5e7700c347f10160. Dec 13 01:30:10.202831 containerd[1711]: time="2024-12-13T01:30:10.202780707Z" level=info msg="StartContainer for \"65a7333b2cf4df78f8071bc3357a0121c1a26357b50eb33b5e7700c347f10160\" returns successfully" Dec 13 01:30:10.300481 kubelet[3168]: I1213 01:30:10.300291 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:10.351425 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:30:10.351555 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Dec 13 01:30:11.062516 kubelet[3168]: I1213 01:30:11.062080 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kz488" podStartSLOduration=1.506925436 podStartE2EDuration="19.062060692s" podCreationTimestamp="2024-12-13 01:29:52 +0000 UTC" firstStartedPulling="2024-12-13 01:29:52.534240879 +0000 UTC m=+17.751639605" lastFinishedPulling="2024-12-13 01:30:10.089376135 +0000 UTC m=+35.306774861" observedRunningTime="2024-12-13 01:30:11.06094469 +0000 UTC m=+36.278343416" watchObservedRunningTime="2024-12-13 01:30:11.062060692 +0000 UTC m=+36.279459418" Dec 13 01:30:11.854524 kernel: bpftool[4356]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:30:12.122650 systemd-networkd[1329]: vxlan.calico: Link UP Dec 13 01:30:12.122659 systemd-networkd[1329]: vxlan.calico: Gained carrier Dec 13 01:30:13.833625 systemd-networkd[1329]: vxlan.calico: Gained IPv6LL Dec 13 01:30:13.890715 containerd[1711]: time="2024-12-13T01:30:13.890320646Z" level=info msg="StopPodSandbox for \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\"" Dec 13 01:30:13.892063 containerd[1711]: time="2024-12-13T01:30:13.890386126Z" level=info msg="StopPodSandbox for \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\"" Dec 13 01:30:13.892776 containerd[1711]: time="2024-12-13T01:30:13.890412486Z" level=info msg="StopPodSandbox for \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\"" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:13.984 [INFO][4492] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:13.984 [INFO][4492] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" iface="eth0" netns="/var/run/netns/cni-b84b4cab-e466-ddfb-b3f1-ec5555a55fcc" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:13.984 [INFO][4492] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" iface="eth0" netns="/var/run/netns/cni-b84b4cab-e466-ddfb-b3f1-ec5555a55fcc" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:13.988 [INFO][4492] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" iface="eth0" netns="/var/run/netns/cni-b84b4cab-e466-ddfb-b3f1-ec5555a55fcc" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:13.988 [INFO][4492] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:13.988 [INFO][4492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:14.028 [INFO][4507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" HandleID="k8s-pod-network.d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:14.028 [INFO][4507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:14.028 [INFO][4507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:14.038 [WARNING][4507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" HandleID="k8s-pod-network.d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:14.038 [INFO][4507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" HandleID="k8s-pod-network.d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:14.040 [INFO][4507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:14.045084 containerd[1711]: 2024-12-13 01:30:14.042 [INFO][4492] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:14.045084 containerd[1711]: time="2024-12-13T01:30:14.044600228Z" level=info msg="TearDown network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\" successfully" Dec 13 01:30:14.045084 containerd[1711]: time="2024-12-13T01:30:14.044645708Z" level=info msg="StopPodSandbox for \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\" returns successfully" Dec 13 01:30:14.049659 systemd[1]: run-netns-cni\x2db84b4cab\x2de466\x2dddfb\x2db3f1\x2dec5555a55fcc.mount: Deactivated successfully. Dec 13 01:30:14.051260 containerd[1711]: time="2024-12-13T01:30:14.051160522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kq89,Uid:a987f675-3896-4490-b719-7c769af12cf2,Namespace:calico-system,Attempt:1,}" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:13.978 [INFO][4481] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:13.980 [INFO][4481] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" iface="eth0" netns="/var/run/netns/cni-f9938468-2f61-5556-fbad-6f226a521fc2" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:13.980 [INFO][4481] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" iface="eth0" netns="/var/run/netns/cni-f9938468-2f61-5556-fbad-6f226a521fc2" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:13.980 [INFO][4481] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" iface="eth0" netns="/var/run/netns/cni-f9938468-2f61-5556-fbad-6f226a521fc2" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:13.981 [INFO][4481] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:13.981 [INFO][4481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:14.030 [INFO][4502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" HandleID="k8s-pod-network.ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:14.030 [INFO][4502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:14.040 [INFO][4502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:14.059 [WARNING][4502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" HandleID="k8s-pod-network.ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:14.059 [INFO][4502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" HandleID="k8s-pod-network.ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:14.064 [INFO][4502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:14.074523 containerd[1711]: 2024-12-13 01:30:14.070 [INFO][4481] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:14.076999 containerd[1711]: time="2024-12-13T01:30:14.076607179Z" level=info msg="TearDown network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\" successfully" Dec 13 01:30:14.076999 containerd[1711]: time="2024-12-13T01:30:14.076647259Z" level=info msg="StopPodSandbox for \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\" returns successfully" Dec 13 01:30:14.079725 containerd[1711]: time="2024-12-13T01:30:14.079431145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pvjm5,Uid:d3ab96e8-d944-493f-9479-dccde4369fe1,Namespace:kube-system,Attempt:1,}" Dec 13 01:30:14.079951 systemd[1]: run-netns-cni\x2df9938468\x2d2f61\x2d5556\x2dfbad\x2d6f226a521fc2.mount: Deactivated successfully. Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:13.985 [INFO][4485] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:13.986 [INFO][4485] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" iface="eth0" netns="/var/run/netns/cni-ba4138d5-64ba-17c5-6b7f-1849a634845c" Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:13.987 [INFO][4485] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" iface="eth0" netns="/var/run/netns/cni-ba4138d5-64ba-17c5-6b7f-1849a634845c" Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:13.988 [INFO][4485] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" iface="eth0" netns="/var/run/netns/cni-ba4138d5-64ba-17c5-6b7f-1849a634845c" Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:13.988 [INFO][4485] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:13.988 [INFO][4485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:14.030 [INFO][4508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" HandleID="k8s-pod-network.e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:14.031 [INFO][4508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:14.064 [INFO][4508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:14.079 [WARNING][4508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" HandleID="k8s-pod-network.e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:14.079 [INFO][4508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" HandleID="k8s-pod-network.e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:14.082 [INFO][4508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:14.086270 containerd[1711]: 2024-12-13 01:30:14.084 [INFO][4485] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:14.089627 containerd[1711]: time="2024-12-13T01:30:14.086395761Z" level=info msg="TearDown network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\" successfully" Dec 13 01:30:14.089627 containerd[1711]: time="2024-12-13T01:30:14.086422361Z" level=info msg="StopPodSandbox for \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\" returns successfully" Dec 13 01:30:14.089627 containerd[1711]: time="2024-12-13T01:30:14.088135524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696589d6dc-8hhq2,Uid:5c45531f-b3b4-4928-a6d3-7b32cfab7875,Namespace:calico-system,Attempt:1,}" Dec 13 01:30:14.089720 systemd[1]: run-netns-cni\x2dba4138d5\x2d64ba\x2d17c5\x2d6b7f\x2d1849a634845c.mount: Deactivated successfully. Dec 13 01:30:14.891099 containerd[1711]: time="2024-12-13T01:30:14.890779265Z" level=info msg="StopPodSandbox for \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\"" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.935 [INFO][4533] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.936 [INFO][4533] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" iface="eth0" netns="/var/run/netns/cni-9e7dda30-503c-da6e-8926-95cf9ff3977f" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.936 [INFO][4533] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" iface="eth0" netns="/var/run/netns/cni-9e7dda30-503c-da6e-8926-95cf9ff3977f" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.937 [INFO][4533] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" iface="eth0" netns="/var/run/netns/cni-9e7dda30-503c-da6e-8926-95cf9ff3977f" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.937 [INFO][4533] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.937 [INFO][4533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.954 [INFO][4541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" HandleID="k8s-pod-network.c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.954 [INFO][4541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.954 [INFO][4541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.962 [WARNING][4541] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" HandleID="k8s-pod-network.c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.962 [INFO][4541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" HandleID="k8s-pod-network.c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.963 [INFO][4541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:14.967158 containerd[1711]: 2024-12-13 01:30:14.965 [INFO][4533] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:14.967655 containerd[1711]: time="2024-12-13T01:30:14.967389995Z" level=info msg="TearDown network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\" successfully" Dec 13 01:30:14.967655 containerd[1711]: time="2024-12-13T01:30:14.967449035Z" level=info msg="StopPodSandbox for \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\" returns successfully" Dec 13 01:30:14.971027 containerd[1711]: time="2024-12-13T01:30:14.969080678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658975bcf4-wgcm5,Uid:e7f7e2e0-1463-4f9b-9827-a422072878d0,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:30:14.970421 systemd[1]: run-netns-cni\x2d9e7dda30\x2d503c\x2dda6e\x2d8926\x2d95cf9ff3977f.mount: Deactivated successfully. 
Dec 13 01:30:15.387930 systemd-networkd[1329]: cali86486306892: Link UP Dec 13 01:30:15.388709 systemd-networkd[1329]: cali86486306892: Gained carrier Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.240 [INFO][4547] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0 csi-node-driver- calico-system a987f675-3896-4490-b719-7c769af12cf2 733 0 2024-12-13 01:29:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.1-a-ab3ee36414 csi-node-driver-9kq89 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali86486306892 [] []}} ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Namespace="calico-system" Pod="csi-node-driver-9kq89" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.240 [INFO][4547] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Namespace="calico-system" Pod="csi-node-driver-9kq89" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.320 [INFO][4588] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" HandleID="k8s-pod-network.43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.337 [INFO][4588] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" HandleID="k8s-pod-network.43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317990), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-ab3ee36414", "pod":"csi-node-driver-9kq89", "timestamp":"2024-12-13 01:30:15.320441538 +0000 UTC"}, Hostname:"ci-4081.2.1-a-ab3ee36414", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.337 [INFO][4588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.337 [INFO][4588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.337 [INFO][4588] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-ab3ee36414' Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.338 [INFO][4588] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.343 [INFO][4588] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.350 [INFO][4588] ipam/ipam.go 489: Trying affinity for 192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.352 [INFO][4588] ipam/ipam.go 155: Attempting to load block cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.356 [INFO][4588] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.356 [INFO][4588] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.192/26 handle="k8s-pod-network.43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.360 [INFO][4588] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637 Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.365 [INFO][4588] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.98.192/26 handle="k8s-pod-network.43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.379 [INFO][4588] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.98.193/26] block=192.168.98.192/26 handle="k8s-pod-network.43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.379 [INFO][4588] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.193/26] handle="k8s-pod-network.43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.379 [INFO][4588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:15.415255 containerd[1711]: 2024-12-13 01:30:15.379 [INFO][4588] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.193/26] IPv6=[] ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" HandleID="k8s-pod-network.43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:15.415813 containerd[1711]: 2024-12-13 01:30:15.384 [INFO][4547] cni-plugin/k8s.go 386: Populated endpoint ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Namespace="calico-system" Pod="csi-node-driver-9kq89" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a987f675-3896-4490-b719-7c769af12cf2", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"", Pod:"csi-node-driver-9kq89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86486306892", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:15.415813 containerd[1711]: 2024-12-13 01:30:15.384 [INFO][4547] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.98.193/32] ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Namespace="calico-system" Pod="csi-node-driver-9kq89" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:15.415813 containerd[1711]: 2024-12-13 01:30:15.385 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86486306892 ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Namespace="calico-system" Pod="csi-node-driver-9kq89" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:15.415813 containerd[1711]: 2024-12-13 01:30:15.389 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Namespace="calico-system" Pod="csi-node-driver-9kq89" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:15.415813 containerd[1711]: 2024-12-13 01:30:15.391 [INFO][4547] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Namespace="calico-system" Pod="csi-node-driver-9kq89" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a987f675-3896-4490-b719-7c769af12cf2", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637", Pod:"csi-node-driver-9kq89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86486306892", MAC:"5a:2f:a1:83:aa:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:15.415813 containerd[1711]: 2024-12-13 01:30:15.412 [INFO][4547] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637" Namespace="calico-system" Pod="csi-node-driver-9kq89" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:15.451848 containerd[1711]: time="2024-12-13T01:30:15.451655149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:15.452597 containerd[1711]: time="2024-12-13T01:30:15.451824789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:15.452597 containerd[1711]: time="2024-12-13T01:30:15.451986629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:15.452597 containerd[1711]: time="2024-12-13T01:30:15.452531151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:15.480726 systemd[1]: Started cri-containerd-43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637.scope - libcontainer container 43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637. 
Dec 13 01:30:15.501699 systemd-networkd[1329]: cali56b84286dd8: Link UP Dec 13 01:30:15.504107 systemd-networkd[1329]: cali56b84286dd8: Gained carrier Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.301 [INFO][4566] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0 coredns-6f6b679f8f- kube-system d3ab96e8-d944-493f-9479-dccde4369fe1 731 0 2024-12-13 01:29:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-ab3ee36414 coredns-6f6b679f8f-pvjm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56b84286dd8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pvjm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.301 [INFO][4566] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pvjm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.375 [INFO][4603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" HandleID="k8s-pod-network.022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.439 [INFO][4603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" HandleID="k8s-pod-network.022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003178e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-ab3ee36414", "pod":"coredns-6f6b679f8f-pvjm5", "timestamp":"2024-12-13 01:30:15.37573758 +0000 UTC"}, Hostname:"ci-4081.2.1-a-ab3ee36414", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.439 [INFO][4603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.439 [INFO][4603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.439 [INFO][4603] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-ab3ee36414' Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.442 [INFO][4603] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.447 [INFO][4603] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.451 [INFO][4603] ipam/ipam.go 489: Trying affinity for 192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.453 [INFO][4603] ipam/ipam.go 155: Attempting to load block cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.456 [INFO][4603] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.456 [INFO][4603] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.192/26 handle="k8s-pod-network.022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.459 [INFO][4603] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0 Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.471 [INFO][4603] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.98.192/26 handle="k8s-pod-network.022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.490 [INFO][4603] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.98.194/26] block=192.168.98.192/26 handle="k8s-pod-network.022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.490 [INFO][4603] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.194/26] handle="k8s-pod-network.022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.491 [INFO][4603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:15.544324 containerd[1711]: 2024-12-13 01:30:15.491 [INFO][4603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.194/26] IPv6=[] ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" HandleID="k8s-pod-network.022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:15.544932 containerd[1711]: 2024-12-13 01:30:15.496 [INFO][4566] cni-plugin/k8s.go 386: Populated endpoint ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pvjm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d3ab96e8-d944-493f-9479-dccde4369fe1", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"", Pod:"coredns-6f6b679f8f-pvjm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56b84286dd8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:15.544932 containerd[1711]: 2024-12-13 01:30:15.496 [INFO][4566] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.98.194/32] ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pvjm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:15.544932 containerd[1711]: 2024-12-13 01:30:15.496 [INFO][4566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56b84286dd8 ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pvjm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:15.544932 containerd[1711]: 2024-12-13 01:30:15.503 [INFO][4566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pvjm5" 
WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:15.544932 containerd[1711]: 2024-12-13 01:30:15.506 [INFO][4566] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pvjm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d3ab96e8-d944-493f-9479-dccde4369fe1", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0", Pod:"coredns-6f6b679f8f-pvjm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56b84286dd8", MAC:"b6:5c:76:fd:73:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:15.544932 containerd[1711]: 2024-12-13 01:30:15.543 [INFO][4566] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pvjm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:15.585351 containerd[1711]: time="2024-12-13T01:30:15.585231245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:15.585780 containerd[1711]: time="2024-12-13T01:30:15.585733726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:15.585913 containerd[1711]: time="2024-12-13T01:30:15.585885326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:15.586364 containerd[1711]: time="2024-12-13T01:30:15.586274887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:15.607963 systemd-networkd[1329]: cali791c15622ad: Link UP Dec 13 01:30:15.612705 systemd-networkd[1329]: cali791c15622ad: Gained carrier Dec 13 01:30:15.620674 systemd[1]: Started cri-containerd-022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0.scope - libcontainer container 022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0. Dec 13 01:30:15.651401 containerd[1711]: time="2024-12-13T01:30:15.651334352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kq89,Uid:a987f675-3896-4490-b719-7c769af12cf2,Namespace:calico-system,Attempt:1,} returns sandbox id \"43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637\"" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.300 [INFO][4558] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0 calico-kube-controllers-696589d6dc- calico-system 5c45531f-b3b4-4928-a6d3-7b32cfab7875 732 0 2024-12-13 01:29:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:696589d6dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.1-a-ab3ee36414 calico-kube-controllers-696589d6dc-8hhq2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali791c15622ad [] []}} ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Namespace="calico-system" Pod="calico-kube-controllers-696589d6dc-8hhq2" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.300 [INFO][4558] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Namespace="calico-system" Pod="calico-kube-controllers-696589d6dc-8hhq2" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.383 [INFO][4607] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" HandleID="k8s-pod-network.082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.440 [INFO][4607] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" HandleID="k8s-pod-network.082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bac80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-ab3ee36414", "pod":"calico-kube-controllers-696589d6dc-8hhq2", "timestamp":"2024-12-13 01:30:15.383601638 +0000 UTC"}, Hostname:"ci-4081.2.1-a-ab3ee36414", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 
01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.440 [INFO][4607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.492 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.492 [INFO][4607] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-ab3ee36414' Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.547 [INFO][4607] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.557 [INFO][4607] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.563 [INFO][4607] ipam/ipam.go 489: Trying affinity for 192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.566 [INFO][4607] ipam/ipam.go 155: Attempting to load block cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.570 [INFO][4607] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.570 [INFO][4607] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.192/26 handle="k8s-pod-network.082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.572 [INFO][4607] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.584 [INFO][4607] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.98.192/26 handle="k8s-pod-network.082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.596 [INFO][4607] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.98.195/26] block=192.168.98.192/26 handle="k8s-pod-network.082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.596 [INFO][4607] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.195/26] handle="k8s-pod-network.082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.596 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:15.652295 containerd[1711]: 2024-12-13 01:30:15.596 [INFO][4607] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.195/26] IPv6=[] ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" HandleID="k8s-pod-network.082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:15.652881 containerd[1711]: 2024-12-13 01:30:15.602 [INFO][4558] cni-plugin/k8s.go 386: Populated endpoint ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Namespace="calico-system" Pod="calico-kube-controllers-696589d6dc-8hhq2" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0", GenerateName:"calico-kube-controllers-696589d6dc-", Namespace:"calico-system", SelfLink:"", UID:"5c45531f-b3b4-4928-a6d3-7b32cfab7875", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696589d6dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"", Pod:"calico-kube-controllers-696589d6dc-8hhq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali791c15622ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:15.652881 containerd[1711]: 2024-12-13 01:30:15.602 [INFO][4558] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.98.195/32] ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Namespace="calico-system" Pod="calico-kube-controllers-696589d6dc-8hhq2" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:15.652881 containerd[1711]: 2024-12-13 01:30:15.604 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali791c15622ad ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Namespace="calico-system" Pod="calico-kube-controllers-696589d6dc-8hhq2" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:15.652881 containerd[1711]: 2024-12-13 01:30:15.614 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Namespace="calico-system" Pod="calico-kube-controllers-696589d6dc-8hhq2" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:15.652881 
containerd[1711]: 2024-12-13 01:30:15.618 [INFO][4558] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Namespace="calico-system" Pod="calico-kube-controllers-696589d6dc-8hhq2" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0", GenerateName:"calico-kube-controllers-696589d6dc-", Namespace:"calico-system", SelfLink:"", UID:"5c45531f-b3b4-4928-a6d3-7b32cfab7875", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696589d6dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac", Pod:"calico-kube-controllers-696589d6dc-8hhq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali791c15622ad", MAC:"a2:a0:61:d6:f6:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:15.652881 containerd[1711]: 2024-12-13 01:30:15.644 [INFO][4558] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac" Namespace="calico-system" Pod="calico-kube-controllers-696589d6dc-8hhq2" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:15.665887 containerd[1711]: time="2024-12-13T01:30:15.665844064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:30:15.706031 containerd[1711]: time="2024-12-13T01:30:15.705918193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pvjm5,Uid:d3ab96e8-d944-493f-9479-dccde4369fe1,Namespace:kube-system,Attempt:1,} returns sandbox id \"022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0\"" Dec 13 01:30:15.709892 systemd-networkd[1329]: calidaaf23f112a: Link UP Dec 13 01:30:15.710073 systemd-networkd[1329]: calidaaf23f112a: Gained carrier Dec 13 01:30:15.713831 containerd[1711]: time="2024-12-13T01:30:15.713687650Z" level=info msg="CreateContainer within sandbox \"022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:15.728745 containerd[1711]: time="2024-12-13T01:30:15.724582074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:15.728745 containerd[1711]: time="2024-12-13T01:30:15.724677034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:15.728745 containerd[1711]: time="2024-12-13T01:30:15.724694074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:15.728745 containerd[1711]: time="2024-12-13T01:30:15.724784275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:15.743659 systemd[1]: Started cri-containerd-082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac.scope - libcontainer container 082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac. Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.296 [INFO][4579] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0 calico-apiserver-658975bcf4- calico-apiserver e7f7e2e0-1463-4f9b-9827-a422072878d0 738 0 2024-12-13 01:29:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:658975bcf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-ab3ee36414 calico-apiserver-658975bcf4-wgcm5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidaaf23f112a [] []}} ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-wgcm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.296 [INFO][4579] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-wgcm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.369 [INFO][4601] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" HandleID="k8s-pod-network.b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.442 [INFO][4601] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" HandleID="k8s-pod-network.b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003167e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-ab3ee36414", "pod":"calico-apiserver-658975bcf4-wgcm5", "timestamp":"2024-12-13 01:30:15.369393646 +0000 UTC"}, Hostname:"ci-4081.2.1-a-ab3ee36414", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.442 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.596 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.596 [INFO][4601] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-ab3ee36414' Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.643 [INFO][4601] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.660 [INFO][4601] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.669 [INFO][4601] ipam/ipam.go 489: Trying affinity for 192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.672 [INFO][4601] ipam/ipam.go 155: Attempting to load block cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.678 [INFO][4601] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.678 [INFO][4601] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.192/26 handle="k8s-pod-network.b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.680 [INFO][4601] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2 Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.685 [INFO][4601] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.98.192/26 handle="k8s-pod-network.b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.701 [INFO][4601] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.98.196/26] block=192.168.98.192/26 handle="k8s-pod-network.b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.701 [INFO][4601] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.196/26] handle="k8s-pod-network.b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.701 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:15.748448 containerd[1711]: 2024-12-13 01:30:15.701 [INFO][4601] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.196/26] IPv6=[] ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" HandleID="k8s-pod-network.b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:15.749624 containerd[1711]: 2024-12-13 01:30:15.705 [INFO][4579] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-wgcm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0", GenerateName:"calico-apiserver-658975bcf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7f7e2e0-1463-4f9b-9827-a422072878d0", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658975bcf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"", Pod:"calico-apiserver-658975bcf4-wgcm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaaf23f112a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:15.749624 containerd[1711]: 2024-12-13 01:30:15.705 [INFO][4579] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.98.196/32] ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-wgcm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:15.749624 containerd[1711]: 2024-12-13 01:30:15.705 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaaf23f112a ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-wgcm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:15.749624 containerd[1711]: 2024-12-13 01:30:15.718 [INFO][4579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-wgcm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:15.749624 containerd[1711]: 2024-12-13 01:30:15.719 [INFO][4579] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-wgcm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0", GenerateName:"calico-apiserver-658975bcf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7f7e2e0-1463-4f9b-9827-a422072878d0", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658975bcf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2", Pod:"calico-apiserver-658975bcf4-wgcm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaaf23f112a", MAC:"e6:93:ee:f4:7f:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:15.749624 containerd[1711]: 2024-12-13 01:30:15.743 [INFO][4579] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-wgcm5" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:15.762975 containerd[1711]: time="2024-12-13T01:30:15.762925959Z" level=info msg="CreateContainer within sandbox \"022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be3538d0665d52a95cfdc3fe819ced546dbbde6719570edc1972f733b6c2e1c9\"" Dec 13 01:30:15.764546 containerd[1711]: time="2024-12-13T01:30:15.764280482Z" level=info msg="StartContainer for \"be3538d0665d52a95cfdc3fe819ced546dbbde6719570edc1972f733b6c2e1c9\"" Dec 13 01:30:15.788555 containerd[1711]: time="2024-12-13T01:30:15.788414376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:15.788555 containerd[1711]: time="2024-12-13T01:30:15.788472576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:15.788555 containerd[1711]: time="2024-12-13T01:30:15.788501576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:15.789253 containerd[1711]: time="2024-12-13T01:30:15.789205177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:15.809968 systemd[1]: Started cri-containerd-be3538d0665d52a95cfdc3fe819ced546dbbde6719570edc1972f733b6c2e1c9.scope - libcontainer container be3538d0665d52a95cfdc3fe819ced546dbbde6719570edc1972f733b6c2e1c9. Dec 13 01:30:15.812723 containerd[1711]: time="2024-12-13T01:30:15.812693909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696589d6dc-8hhq2,Uid:5c45531f-b3b4-4928-a6d3-7b32cfab7875,Namespace:calico-system,Attempt:1,} returns sandbox id \"082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac\"" Dec 13 01:30:15.820674 systemd[1]: Started cri-containerd-b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2.scope - libcontainer container b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2. Dec 13 01:30:15.847541 containerd[1711]: time="2024-12-13T01:30:15.847484867Z" level=info msg="StartContainer for \"be3538d0665d52a95cfdc3fe819ced546dbbde6719570edc1972f733b6c2e1c9\" returns successfully" Dec 13 01:30:15.871198 containerd[1711]: time="2024-12-13T01:30:15.871161319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658975bcf4-wgcm5,Uid:e7f7e2e0-1463-4f9b-9827-a422072878d0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2\"" Dec 13 01:30:16.076241 kubelet[3168]: I1213 01:30:16.074154 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pvjm5" podStartSLOduration=35.074139849 podStartE2EDuration="35.074139849s" podCreationTimestamp="2024-12-13 01:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:16.073813409 +0000 UTC m=+41.291212135" watchObservedRunningTime="2024-12-13 01:30:16.074139849 +0000 UTC m=+41.291538575" Dec 13 01:30:16.714224 systemd-networkd[1329]: cali56b84286dd8: Gained IPv6LL Dec 13 01:30:16.841649 systemd-networkd[1329]: cali791c15622ad: Gained IPv6LL Dec 13 01:30:16.852869 containerd[1711]: time="2024-12-13T01:30:16.852827121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:16.854809 containerd[1711]: time="2024-12-13T01:30:16.854769085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:30:16.859123 containerd[1711]: time="2024-12-13T01:30:16.859075694Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:16.863264 containerd[1711]: time="2024-12-13T01:30:16.863222862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:16.864024 containerd[1711]: time="2024-12-13T01:30:16.863924384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.19789428s" Dec 13 01:30:16.864024 containerd[1711]: 
time="2024-12-13T01:30:16.863953064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:30:16.865531 containerd[1711]: time="2024-12-13T01:30:16.865332707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:30:16.866866 containerd[1711]: time="2024-12-13T01:30:16.866839790Z" level=info msg="CreateContainer within sandbox \"43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:30:16.892166 containerd[1711]: time="2024-12-13T01:30:16.890989239Z" level=info msg="StopPodSandbox for \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\"" Dec 13 01:30:16.898873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340035666.mount: Deactivated successfully. Dec 13 01:30:16.908762 containerd[1711]: time="2024-12-13T01:30:16.908595234Z" level=info msg="CreateContainer within sandbox \"43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1c075f8533586c9a2b5deb855a2d7075aa6482b97e3d3a2ec9fea9149cc277e6\"" Dec 13 01:30:16.909648 containerd[1711]: time="2024-12-13T01:30:16.909532916Z" level=info msg="StartContainer for \"1c075f8533586c9a2b5deb855a2d7075aa6482b97e3d3a2ec9fea9149cc277e6\"" Dec 13 01:30:16.968203 systemd[1]: Started cri-containerd-1c075f8533586c9a2b5deb855a2d7075aa6482b97e3d3a2ec9fea9149cc277e6.scope - libcontainer container 1c075f8533586c9a2b5deb855a2d7075aa6482b97e3d3a2ec9fea9149cc277e6. Dec 13 01:30:17.016077 containerd[1711]: time="2024-12-13T01:30:17.016030212Z" level=info msg="StartContainer for \"1c075f8533586c9a2b5deb855a2d7075aa6482b97e3d3a2ec9fea9149cc277e6\" returns successfully" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:16.960 [INFO][4896] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:16.961 [INFO][4896] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" iface="eth0" netns="/var/run/netns/cni-3692e64c-f27c-e500-b898-111bee736f0f" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:16.961 [INFO][4896] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" iface="eth0" netns="/var/run/netns/cni-3692e64c-f27c-e500-b898-111bee736f0f" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:16.961 [INFO][4896] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" iface="eth0" netns="/var/run/netns/cni-3692e64c-f27c-e500-b898-111bee736f0f" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:16.961 [INFO][4896] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:16.961 [INFO][4896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:16.992 [INFO][4923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" HandleID="k8s-pod-network.720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:16.992 [INFO][4923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:16.992 [INFO][4923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:17.002 [WARNING][4923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" HandleID="k8s-pod-network.720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:17.013 [INFO][4923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" HandleID="k8s-pod-network.720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:17.014 [INFO][4923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:17.019077 containerd[1711]: 2024-12-13 01:30:17.017 [INFO][4896] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:17.019641 containerd[1711]: time="2024-12-13T01:30:17.019196138Z" level=info msg="TearDown network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\" successfully" Dec 13 01:30:17.019641 containerd[1711]: time="2024-12-13T01:30:17.019217378Z" level=info msg="StopPodSandbox for \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\" returns successfully" Dec 13 01:30:17.020479 containerd[1711]: time="2024-12-13T01:30:17.020360380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658975bcf4-zrhvc,Uid:c5e3c1dd-da7e-42cd-b3e9-c3b3953719af,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:30:17.097762 systemd-networkd[1329]: cali86486306892: Gained IPv6LL Dec 13 01:30:17.161654 systemd-networkd[1329]: calidaaf23f112a: Gained IPv6LL Dec 13 01:30:17.166609 systemd-networkd[1329]: cali1e2ecf75839: Link UP Dec 13 01:30:17.166793 systemd-networkd[1329]: cali1e2ecf75839: Gained carrier Dec 13 01:30:17.183922 systemd[1]: run-netns-cni\x2d3692e64c\x2df27c\x2de500\x2db898\x2d111bee736f0f.mount: Deactivated successfully. 
Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.098 [INFO][4947] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0 calico-apiserver-658975bcf4- calico-apiserver c5e3c1dd-da7e-42cd-b3e9-c3b3953719af 772 0 2024-12-13 01:29:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:658975bcf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-ab3ee36414 calico-apiserver-658975bcf4-zrhvc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e2ecf75839 [] []}} ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-zrhvc" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.099 [INFO][4947] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-zrhvc" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.121 [INFO][4958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" HandleID="k8s-pod-network.51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.131 [INFO][4958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" HandleID="k8s-pod-network.51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-ab3ee36414", "pod":"calico-apiserver-658975bcf4-zrhvc", "timestamp":"2024-12-13 01:30:17.121776586 +0000 UTC"}, Hostname:"ci-4081.2.1-a-ab3ee36414", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.131 [INFO][4958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.131 [INFO][4958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
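The "Extracted identifiers for CmdAddK8s" step above is where the plugin turns the runtime's invocation into a pod identity: per the CNI spec the runtime execs the plugin with CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME and CNI_ARGS in the environment and the network config on stdin, and for Kubernetes the pod namespace and name typically ride along in CNI_ARGS as K8S_POD_NAMESPACE / K8S_POD_NAME pairs. A minimal standard-library sketch of reading such an invocation, as a spec-level illustration rather than Calico's plugin:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
	"strings"
)

func main() {
	// Core CNI environment variables set by the runtime for every ADD/DEL/CHECK.
	fmt.Println("CNI_COMMAND:    ", os.Getenv("CNI_COMMAND"))
	fmt.Println("CNI_CONTAINERID:", os.Getenv("CNI_CONTAINERID"))
	fmt.Println("CNI_NETNS:      ", os.Getenv("CNI_NETNS"))
	fmt.Println("CNI_IFNAME:     ", os.Getenv("CNI_IFNAME"))

	// Extra key=value pairs, e.g. K8S_POD_NAMESPACE and K8S_POD_NAME for Kubernetes.
	for _, kv := range strings.Split(os.Getenv("CNI_ARGS"), ";") {
		if k, v, ok := strings.Cut(kv, "="); ok {
			fmt.Printf("arg %s = %s\n", k, v)
		}
	}

	// The network configuration document arrives on stdin as JSON.
	var conf map[string]any
	if raw, err := io.ReadAll(os.Stdin); err == nil && len(raw) > 0 {
		_ = json.Unmarshal(raw, &conf)
		fmt.Println("network config name:", conf["name"])
	}
}
```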
Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.131 [INFO][4958] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-ab3ee36414' Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.133 [INFO][4958] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.137 [INFO][4958] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.141 [INFO][4958] ipam/ipam.go 489: Trying affinity for 192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.142 [INFO][4958] ipam/ipam.go 155: Attempting to load block cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.144 [INFO][4958] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.144 [INFO][4958] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.192/26 handle="k8s-pod-network.51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.146 [INFO][4958] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17 Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.150 [INFO][4958] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.98.192/26 handle="k8s-pod-network.51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.160 [INFO][4958] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.98.197/26] block=192.168.98.192/26 handle="k8s-pod-network.51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.160 [INFO][4958] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.197/26] handle="k8s-pod-network.51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.160 [INFO][4958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
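The assignment above walks the sequence Calico logs for every new endpoint on this node: acquire the host-wide IPAM lock, confirm the node's affinity for block 192.168.98.192/26, load the block, claim one free address (here 192.168.98.197/26), write the block back, and release the lock. A minimal sketch of claiming one address from a /26 block under a lock, as a simplified model of that sequence rather than Calico's implementation; the assumption that five lower addresses are already taken is only there to make the example land on .197:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy stand-in for a Calico IPAM block: a /26 CIDR plus a bitmap
// recording which of its 64 addresses are already in use.
type block struct {
	mu    sync.Mutex // plays the role of the host-wide IPAM lock
	cidr  netip.Prefix
	inUse [64]bool
}

// assign claims the first free address in the block, mirroring the
// "Attempting to assign 1 addresses from block" step in the log.
func (b *block) assign() (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()

	addr := b.cidr.Addr()
	for i := 0; i < 64; i++ {
		if !b.inUse[i] {
			b.inUse[i] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.98.192/26")}
	// Hypothetical starting state: .192 through .196 already claimed,
	// so the next assignment comes out as .197.
	for i := 0; i < 5; i++ {
		b.inUse[i] = true
	}
	if ip, ok := b.assign(); ok {
		fmt.Println("claimed", ip) // 192.168.98.197
	}
}
```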
Dec 13 01:30:17.191371 containerd[1711]: 2024-12-13 01:30:17.160 [INFO][4958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.197/26] IPv6=[] ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" HandleID="k8s-pod-network.51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.192195 containerd[1711]: 2024-12-13 01:30:17.163 [INFO][4947] cni-plugin/k8s.go 386: Populated endpoint ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-zrhvc" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0", GenerateName:"calico-apiserver-658975bcf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5e3c1dd-da7e-42cd-b3e9-c3b3953719af", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658975bcf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"", Pod:"calico-apiserver-658975bcf4-zrhvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2ecf75839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:17.192195 containerd[1711]: 2024-12-13 01:30:17.163 [INFO][4947] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.98.197/32] ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-zrhvc" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.192195 containerd[1711]: 2024-12-13 01:30:17.163 [INFO][4947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e2ecf75839 ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-zrhvc" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.192195 containerd[1711]: 2024-12-13 01:30:17.166 [INFO][4947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-zrhvc" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.192195 containerd[1711]: 2024-12-13 01:30:17.167 [INFO][4947] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-zrhvc" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0", GenerateName:"calico-apiserver-658975bcf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5e3c1dd-da7e-42cd-b3e9-c3b3953719af", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658975bcf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17", Pod:"calico-apiserver-658975bcf4-zrhvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2ecf75839", MAC:"66:7a:3f:bd:8d:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:17.192195 containerd[1711]: 2024-12-13 01:30:17.187 [INFO][4947] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17" Namespace="calico-apiserver" Pod="calico-apiserver-658975bcf4-zrhvc" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:17.216200 containerd[1711]: time="2024-12-13T01:30:17.215718816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:17.216200 containerd[1711]: time="2024-12-13T01:30:17.216160777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:17.216620 containerd[1711]: time="2024-12-13T01:30:17.216455297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:17.216869 containerd[1711]: time="2024-12-13T01:30:17.216809778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:17.241650 systemd[1]: Started cri-containerd-51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17.scope - libcontainer container 51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17. 
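With the endpoint written to the datastore, the pod's traffic rides a veth pair whose host side was just named cali1e2ecf75839 (MAC 66:7a:3f:bd:8d:90); the systemd-networkd "Gained IPv6LL" entries that bracket these sandbox setups simply record each cali interface acquiring its fe80:: link-local address. A minimal sketch, assuming it is run on this node, that inspects such an interface with only the standard library:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Interface name taken from the log; substitute the endpoint of interest.
	iface, err := net.InterfaceByName("cali1e2ecf75839")
	if err != nil {
		log.Fatalf("lookup interface: %v", err)
	}
	fmt.Printf("%s MAC=%s up=%v\n", iface.Name, iface.HardwareAddr, iface.Flags&net.FlagUp != 0)

	addrs, err := iface.Addrs()
	if err != nil {
		log.Fatalf("list addresses: %v", err)
	}
	for _, a := range addrs {
		ipNet, ok := a.(*net.IPNet)
		if !ok {
			continue
		}
		// networkd's "Gained IPv6LL" corresponds to an fe80::/10 address showing up here.
		fmt.Printf("  %s linkLocal=%v\n", ipNet, ipNet.IP.IsLinkLocalUnicast())
	}
}
```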
Dec 13 01:30:17.279437 containerd[1711]: time="2024-12-13T01:30:17.279396785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658975bcf4-zrhvc,Uid:c5e3c1dd-da7e-42cd-b3e9-c3b3953719af,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17\"" Dec 13 01:30:17.889915 containerd[1711]: time="2024-12-13T01:30:17.889484380Z" level=info msg="StopPodSandbox for \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\"" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.936 [INFO][5033] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.936 [INFO][5033] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" iface="eth0" netns="/var/run/netns/cni-d8d21e08-7b0f-8afc-6446-c8a68e716bb3" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.937 [INFO][5033] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" iface="eth0" netns="/var/run/netns/cni-d8d21e08-7b0f-8afc-6446-c8a68e716bb3" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.937 [INFO][5033] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" iface="eth0" netns="/var/run/netns/cni-d8d21e08-7b0f-8afc-6446-c8a68e716bb3" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.937 [INFO][5033] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.937 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.966 [INFO][5039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" HandleID="k8s-pod-network.c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.966 [INFO][5039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.966 [INFO][5039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.982 [WARNING][5039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" HandleID="k8s-pod-network.c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.983 [INFO][5039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" HandleID="k8s-pod-network.c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.984 [INFO][5039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:17.989581 containerd[1711]: 2024-12-13 01:30:17.986 [INFO][5033] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:17.989581 containerd[1711]: time="2024-12-13T01:30:17.988649660Z" level=info msg="TearDown network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\" successfully" Dec 13 01:30:17.989581 containerd[1711]: time="2024-12-13T01:30:17.988679100Z" level=info msg="StopPodSandbox for \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\" returns successfully" Dec 13 01:30:17.990505 systemd[1]: run-netns-cni\x2dd8d21e08\x2d7b0f\x2d8afc\x2d6446\x2dc8a68e716bb3.mount: Deactivated successfully. Dec 13 01:30:17.993605 containerd[1711]: time="2024-12-13T01:30:17.993164789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-clhzm,Uid:820af745-97f5-43f0-a795-d962b8d83e56,Namespace:kube-system,Attempt:1,}" Dec 13 01:30:18.148878 systemd-networkd[1329]: cali315e384a0a1: Link UP Dec 13 01:30:18.150293 systemd-networkd[1329]: cali315e384a0a1: Gained carrier Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.062 [INFO][5045] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0 coredns-6f6b679f8f- kube-system 820af745-97f5-43f0-a795-d962b8d83e56 780 0 2024-12-13 01:29:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-ab3ee36414 coredns-6f6b679f8f-clhzm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali315e384a0a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Namespace="kube-system" Pod="coredns-6f6b679f8f-clhzm" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.062 [INFO][5045] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Namespace="kube-system" Pod="coredns-6f6b679f8f-clhzm" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.097 [INFO][5057] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" HandleID="k8s-pod-network.cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" 
Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.108 [INFO][5057] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" HandleID="k8s-pod-network.cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003194f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-ab3ee36414", "pod":"coredns-6f6b679f8f-clhzm", "timestamp":"2024-12-13 01:30:18.09738904 +0000 UTC"}, Hostname:"ci-4081.2.1-a-ab3ee36414", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.108 [INFO][5057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.108 [INFO][5057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.108 [INFO][5057] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-ab3ee36414' Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.110 [INFO][5057] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.113 [INFO][5057] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.118 [INFO][5057] ipam/ipam.go 489: Trying affinity for 192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.120 [INFO][5057] ipam/ipam.go 155: Attempting to load block cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.122 [INFO][5057] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.192/26 host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.122 [INFO][5057] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.192/26 handle="k8s-pod-network.cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.123 [INFO][5057] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.128 [INFO][5057] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.98.192/26 handle="k8s-pod-network.cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.137 [INFO][5057] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.98.198/26] block=192.168.98.192/26 handle="k8s-pod-network.cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.137 [INFO][5057] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.198/26] 
handle="k8s-pod-network.cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" host="ci-4081.2.1-a-ab3ee36414" Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.137 [INFO][5057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:18.178076 containerd[1711]: 2024-12-13 01:30:18.137 [INFO][5057] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.198/26] IPv6=[] ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" HandleID="k8s-pod-network.cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:18.179001 containerd[1711]: 2024-12-13 01:30:18.140 [INFO][5045] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Namespace="kube-system" Pod="coredns-6f6b679f8f-clhzm" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"820af745-97f5-43f0-a795-d962b8d83e56", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"", Pod:"coredns-6f6b679f8f-clhzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali315e384a0a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:18.179001 containerd[1711]: 2024-12-13 01:30:18.141 [INFO][5045] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.98.198/32] ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Namespace="kube-system" Pod="coredns-6f6b679f8f-clhzm" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:18.179001 containerd[1711]: 2024-12-13 01:30:18.141 [INFO][5045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali315e384a0a1 ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Namespace="kube-system" Pod="coredns-6f6b679f8f-clhzm" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:18.179001 containerd[1711]: 
2024-12-13 01:30:18.147 [INFO][5045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Namespace="kube-system" Pod="coredns-6f6b679f8f-clhzm" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:18.179001 containerd[1711]: 2024-12-13 01:30:18.148 [INFO][5045] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Namespace="kube-system" Pod="coredns-6f6b679f8f-clhzm" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"820af745-97f5-43f0-a795-d962b8d83e56", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e", Pod:"coredns-6f6b679f8f-clhzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali315e384a0a1", MAC:"62:df:6c:5e:b8:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:18.179001 containerd[1711]: 2024-12-13 01:30:18.175 [INFO][5045] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e" Namespace="kube-system" Pod="coredns-6f6b679f8f-clhzm" WorkloadEndpoint="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:18.209731 containerd[1711]: time="2024-12-13T01:30:18.209568547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:18.209731 containerd[1711]: time="2024-12-13T01:30:18.209625588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:18.209731 containerd[1711]: time="2024-12-13T01:30:18.209692268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:18.210185 containerd[1711]: time="2024-12-13T01:30:18.209952188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:18.230812 systemd[1]: Started cri-containerd-cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e.scope - libcontainer container cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e. Dec 13 01:30:18.267666 containerd[1711]: time="2024-12-13T01:30:18.267622905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-clhzm,Uid:820af745-97f5-43f0-a795-d962b8d83e56,Namespace:kube-system,Attempt:1,} returns sandbox id \"cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e\"" Dec 13 01:30:18.271060 containerd[1711]: time="2024-12-13T01:30:18.270831471Z" level=info msg="CreateContainer within sandbox \"cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:18.322613 containerd[1711]: time="2024-12-13T01:30:18.322475456Z" level=info msg="CreateContainer within sandbox \"cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6b5cb708a9c36ae826fde007bc15e71d4e4f4c61d13b9075ff9a68f931df36e\"" Dec 13 01:30:18.323749 containerd[1711]: time="2024-12-13T01:30:18.323650338Z" level=info msg="StartContainer for \"f6b5cb708a9c36ae826fde007bc15e71d4e4f4c61d13b9075ff9a68f931df36e\"" Dec 13 01:30:18.366666 systemd[1]: Started cri-containerd-f6b5cb708a9c36ae826fde007bc15e71d4e4f4c61d13b9075ff9a68f931df36e.scope - libcontainer container f6b5cb708a9c36ae826fde007bc15e71d4e4f4c61d13b9075ff9a68f931df36e. 
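The coredns-6f6b679f8f-clhzm endpoint above was assigned 192.168.98.198, and its WorkloadEndpointPort values are printed in hex: Port:0x35 is 53 (dns and dns-tcp) and Port:0x23c1 is 9153 (metrics). A minimal sketch, assuming it runs somewhere with reach into the pod network and that the cluster uses the default cluster.local domain, that resolves an in-cluster name directly against that endpoint on port 53:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Pod IP and DNS port taken from the log entry above (0x35 == 53).
	const corednsAddr = "192.168.98.198:53"

	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Ignore the resolver's default server and talk to the pod directly.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, corednsAddr)
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// kubernetes.default.svc.cluster.local is the stock in-cluster name;
	// any name served by this CoreDNS instance would do.
	ips, err := r.LookupIPAddr(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		log.Fatalf("lookup failed: %v", err)
	}
	for _, ip := range ips {
		fmt.Println(ip.IP)
	}
}
```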
Dec 13 01:30:18.401110 containerd[1711]: time="2024-12-13T01:30:18.400813695Z" level=info msg="StartContainer for \"f6b5cb708a9c36ae826fde007bc15e71d4e4f4c61d13b9075ff9a68f931df36e\" returns successfully" Dec 13 01:30:18.690086 containerd[1711]: time="2024-12-13T01:30:18.689970520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:18.693250 containerd[1711]: time="2024-12-13T01:30:18.693210166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:30:18.696418 containerd[1711]: time="2024-12-13T01:30:18.696386573Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:18.700482 containerd[1711]: time="2024-12-13T01:30:18.700436421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:18.701033 containerd[1711]: time="2024-12-13T01:30:18.700997862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.835632395s" Dec 13 01:30:18.701085 containerd[1711]: time="2024-12-13T01:30:18.701032662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:30:18.707332 containerd[1711]: time="2024-12-13T01:30:18.707182635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:30:18.714365 containerd[1711]: time="2024-12-13T01:30:18.714318889Z" level=info msg="CreateContainer within sandbox \"082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:30:18.750888 containerd[1711]: time="2024-12-13T01:30:18.750842283Z" level=info msg="CreateContainer within sandbox \"082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"99718313e2ffc97d6ca77e09b13fd8dc37c8dc2680fa4b2b25535c28774988e9\"" Dec 13 01:30:18.751622 containerd[1711]: time="2024-12-13T01:30:18.751393924Z" level=info msg="StartContainer for \"99718313e2ffc97d6ca77e09b13fd8dc37c8dc2680fa4b2b25535c28774988e9\"" Dec 13 01:30:18.762227 systemd-networkd[1329]: cali1e2ecf75839: Gained IPv6LL Dec 13 01:30:18.781656 systemd[1]: Started cri-containerd-99718313e2ffc97d6ca77e09b13fd8dc37c8dc2680fa4b2b25535c28774988e9.scope - libcontainer container 99718313e2ffc97d6ca77e09b13fd8dc37c8dc2680fa4b2b25535c28774988e9. 
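The kubelet's pod_startup_latency_tracker entries in this log report podStartE2EDuration as essentially the gap from podCreationTimestamp to the pod being observed running, and podStartSLOduration as that same gap minus the image-pull window (lastFinishedPulling − firstStartedPulling); when the pulling timestamps are the zero value, as for the coredns pods, the two figures coincide. A minimal sketch reproducing the 35.074139849s figure for coredns-6f6b679f8f-pvjm5 from its logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

// Layout matching the kubelet's timestamp format, e.g. "2024-12-13 01:29:41 +0000 UTC";
// time.Parse accepts an optional fractional-seconds field in the input.
const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the coredns-6f6b679f8f-pvjm5 entry earlier in the log.
	created := mustParse("2024-12-13 01:29:41 +0000 UTC")
	observedRunning := mustParse("2024-12-13 01:30:16.074139849 +0000 UTC")

	e2e := observedRunning.Sub(created)
	fmt.Println("podStartE2EDuration:", e2e) // 35.074139849s

	// No image pull happened for this pod (the pulling timestamps are the
	// zero value 0001-01-01), so the SLO duration equals the E2E duration.
	fmt.Println("podStartSLOduration:", e2e)
}
```

The same relation holds for the pods that did pull images: for csi-node-driver-9kq89 below, 33.151019275s end-to-end minus the 8.826222426s pull window gives exactly the reported 24.324796849s SLO duration.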
Dec 13 01:30:18.830985 containerd[1711]: time="2024-12-13T01:30:18.830940325Z" level=info msg="StartContainer for \"99718313e2ffc97d6ca77e09b13fd8dc37c8dc2680fa4b2b25535c28774988e9\" returns successfully" Dec 13 01:30:19.110416 kubelet[3168]: I1213 01:30:19.109857 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-696589d6dc-8hhq2" podStartSLOduration=24.221945939 podStartE2EDuration="27.10984073s" podCreationTimestamp="2024-12-13 01:29:52 +0000 UTC" firstStartedPulling="2024-12-13 01:30:15.815607116 +0000 UTC m=+41.033005802" lastFinishedPulling="2024-12-13 01:30:18.703501867 +0000 UTC m=+43.920900593" observedRunningTime="2024-12-13 01:30:19.108545087 +0000 UTC m=+44.325943853" watchObservedRunningTime="2024-12-13 01:30:19.10984073 +0000 UTC m=+44.327239416" Dec 13 01:30:19.127683 kubelet[3168]: I1213 01:30:19.126695 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-clhzm" podStartSLOduration=38.126676244 podStartE2EDuration="38.126676244s" podCreationTimestamp="2024-12-13 01:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:19.125908242 +0000 UTC m=+44.343306968" watchObservedRunningTime="2024-12-13 01:30:19.126676244 +0000 UTC m=+44.344074970" Dec 13 01:30:20.091586 kubelet[3168]: I1213 01:30:20.091563 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:20.169670 systemd-networkd[1329]: cali315e384a0a1: Gained IPv6LL Dec 13 01:30:21.656555 containerd[1711]: time="2024-12-13T01:30:21.656487844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:21.658656 containerd[1711]: time="2024-12-13T01:30:21.658625489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:30:21.661144 containerd[1711]: time="2024-12-13T01:30:21.661108894Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:21.666827 containerd[1711]: time="2024-12-13T01:30:21.666769385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:21.667625 containerd[1711]: time="2024-12-13T01:30:21.667474546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.960262631s" Dec 13 01:30:21.667625 containerd[1711]: time="2024-12-13T01:30:21.667526227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:30:21.669946 containerd[1711]: time="2024-12-13T01:30:21.669893351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:30:21.672700 containerd[1711]: time="2024-12-13T01:30:21.672661477Z" level=info 
msg="CreateContainer within sandbox \"b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:30:21.708324 containerd[1711]: time="2024-12-13T01:30:21.708270749Z" level=info msg="CreateContainer within sandbox \"b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"db115139e84e33a088be6e75cff1217d23606b02abc0e4332bfcc4c56ac3ab7d\"" Dec 13 01:30:21.709785 containerd[1711]: time="2024-12-13T01:30:21.709719872Z" level=info msg="StartContainer for \"db115139e84e33a088be6e75cff1217d23606b02abc0e4332bfcc4c56ac3ab7d\"" Dec 13 01:30:21.739642 systemd[1]: Started cri-containerd-db115139e84e33a088be6e75cff1217d23606b02abc0e4332bfcc4c56ac3ab7d.scope - libcontainer container db115139e84e33a088be6e75cff1217d23606b02abc0e4332bfcc4c56ac3ab7d. Dec 13 01:30:21.772457 containerd[1711]: time="2024-12-13T01:30:21.772135598Z" level=info msg="StartContainer for \"db115139e84e33a088be6e75cff1217d23606b02abc0e4332bfcc4c56ac3ab7d\" returns successfully" Dec 13 01:30:23.108443 kubelet[3168]: I1213 01:30:23.108396 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:23.863145 kubelet[3168]: I1213 01:30:23.863103 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:23.946510 kubelet[3168]: I1213 01:30:23.946394 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-658975bcf4-wgcm5" podStartSLOduration=27.14980283 podStartE2EDuration="32.946377658s" podCreationTimestamp="2024-12-13 01:29:51 +0000 UTC" firstStartedPulling="2024-12-13 01:30:15.873100203 +0000 UTC m=+41.090498929" lastFinishedPulling="2024-12-13 01:30:21.669675031 +0000 UTC m=+46.887073757" observedRunningTime="2024-12-13 01:30:22.135682174 +0000 UTC m=+47.353080900" watchObservedRunningTime="2024-12-13 01:30:23.946377658 +0000 UTC m=+49.163776424" Dec 13 01:30:24.474315 containerd[1711]: time="2024-12-13T01:30:24.473630533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:24.475893 containerd[1711]: time="2024-12-13T01:30:24.475861617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:30:24.481123 containerd[1711]: time="2024-12-13T01:30:24.481050587Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:24.486323 containerd[1711]: time="2024-12-13T01:30:24.486262277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:24.487182 containerd[1711]: time="2024-12-13T01:30:24.486854198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 2.816921967s" Dec 13 01:30:24.487182 
containerd[1711]: time="2024-12-13T01:30:24.486886599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:30:24.488099 containerd[1711]: time="2024-12-13T01:30:24.488015401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:30:24.489514 containerd[1711]: time="2024-12-13T01:30:24.489281563Z" level=info msg="CreateContainer within sandbox \"43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:30:24.538658 containerd[1711]: time="2024-12-13T01:30:24.538574978Z" level=info msg="CreateContainer within sandbox \"43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f6f846f91513c60a7765c2484d19099af56063fe5a39a9c057e44c0479079885\"" Dec 13 01:30:24.540575 containerd[1711]: time="2024-12-13T01:30:24.539149939Z" level=info msg="StartContainer for \"f6f846f91513c60a7765c2484d19099af56063fe5a39a9c057e44c0479079885\"" Dec 13 01:30:24.551229 kubelet[3168]: I1213 01:30:24.551184 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:24.576693 systemd[1]: Started cri-containerd-f6f846f91513c60a7765c2484d19099af56063fe5a39a9c057e44c0479079885.scope - libcontainer container f6f846f91513c60a7765c2484d19099af56063fe5a39a9c057e44c0479079885. Dec 13 01:30:24.638951 containerd[1711]: time="2024-12-13T01:30:24.638893971Z" level=info msg="StartContainer for \"f6f846f91513c60a7765c2484d19099af56063fe5a39a9c057e44c0479079885\" returns successfully" Dec 13 01:30:24.806545 containerd[1711]: time="2024-12-13T01:30:24.805666851Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:24.808522 containerd[1711]: time="2024-12-13T01:30:24.808065496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:30:24.812520 containerd[1711]: time="2024-12-13T01:30:24.810999382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 322.95026ms" Dec 13 01:30:24.812677 containerd[1711]: time="2024-12-13T01:30:24.812649865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:30:24.814955 containerd[1711]: time="2024-12-13T01:30:24.814915229Z" level=info msg="CreateContainer within sandbox \"51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:30:24.850487 containerd[1711]: time="2024-12-13T01:30:24.850435377Z" level=info msg="CreateContainer within sandbox \"51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"41958549d843f4bf003b318d772592a2d4d9a5106f400c9fc4c4ee4aea938cde\"" Dec 13 01:30:24.852740 containerd[1711]: 
time="2024-12-13T01:30:24.852701942Z" level=info msg="StartContainer for \"41958549d843f4bf003b318d772592a2d4d9a5106f400c9fc4c4ee4aea938cde\"" Dec 13 01:30:24.879687 systemd[1]: Started cri-containerd-41958549d843f4bf003b318d772592a2d4d9a5106f400c9fc4c4ee4aea938cde.scope - libcontainer container 41958549d843f4bf003b318d772592a2d4d9a5106f400c9fc4c4ee4aea938cde. Dec 13 01:30:24.929395 containerd[1711]: time="2024-12-13T01:30:24.928826328Z" level=info msg="StartContainer for \"41958549d843f4bf003b318d772592a2d4d9a5106f400c9fc4c4ee4aea938cde\" returns successfully" Dec 13 01:30:24.996318 kubelet[3168]: I1213 01:30:24.996276 3168 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:30:24.996318 kubelet[3168]: I1213 01:30:24.996316 3168 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:30:25.150720 kubelet[3168]: I1213 01:30:25.150603 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-658975bcf4-zrhvc" podStartSLOduration=26.617835436 podStartE2EDuration="34.150477234s" podCreationTimestamp="2024-12-13 01:29:51 +0000 UTC" firstStartedPulling="2024-12-13 01:30:17.280867828 +0000 UTC m=+42.498266554" lastFinishedPulling="2024-12-13 01:30:24.813509626 +0000 UTC m=+50.030908352" observedRunningTime="2024-12-13 01:30:25.133900722 +0000 UTC m=+50.351299448" watchObservedRunningTime="2024-12-13 01:30:25.150477234 +0000 UTC m=+50.367875960" Dec 13 01:30:25.151075 kubelet[3168]: I1213 01:30:25.151030 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9kq89" podStartSLOduration=24.324796849 podStartE2EDuration="33.151019275s" podCreationTimestamp="2024-12-13 01:29:52 +0000 UTC" firstStartedPulling="2024-12-13 01:30:15.661585774 +0000 UTC m=+40.878984500" lastFinishedPulling="2024-12-13 01:30:24.4878082 +0000 UTC m=+49.705206926" observedRunningTime="2024-12-13 01:30:25.150128233 +0000 UTC m=+50.367526959" watchObservedRunningTime="2024-12-13 01:30:25.151019275 +0000 UTC m=+50.368418041" Dec 13 01:30:26.122630 kubelet[3168]: I1213 01:30:26.122285 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:33.656998 systemd[1]: run-containerd-runc-k8s.io-65a7333b2cf4df78f8071bc3357a0121c1a26357b50eb33b5e7700c347f10160-runc.a4zFum.mount: Deactivated successfully. Dec 13 01:30:34.899136 containerd[1711]: time="2024-12-13T01:30:34.899092070Z" level=info msg="StopPodSandbox for \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\"" Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.934 [WARNING][5453] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a987f675-3896-4490-b719-7c769af12cf2", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637", Pod:"csi-node-driver-9kq89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86486306892", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.934 [INFO][5453] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.934 [INFO][5453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" iface="eth0" netns="" Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.934 [INFO][5453] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.934 [INFO][5453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.952 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" HandleID="k8s-pod-network.d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.953 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.953 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.960 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" HandleID="k8s-pod-network.d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.960 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" HandleID="k8s-pod-network.d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.962 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:34.965032 containerd[1711]: 2024-12-13 01:30:34.963 [INFO][5453] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:34.966272 containerd[1711]: time="2024-12-13T01:30:34.965073412Z" level=info msg="TearDown network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\" successfully" Dec 13 01:30:34.966272 containerd[1711]: time="2024-12-13T01:30:34.965096932Z" level=info msg="StopPodSandbox for \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\" returns successfully" Dec 13 01:30:34.966272 containerd[1711]: time="2024-12-13T01:30:34.965850853Z" level=info msg="RemovePodSandbox for \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\"" Dec 13 01:30:34.966272 containerd[1711]: time="2024-12-13T01:30:34.965878693Z" level=info msg="Forcibly stopping sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\"" Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.010 [WARNING][5477] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a987f675-3896-4490-b719-7c769af12cf2", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"43332e2f9d72f43eafc10f233cc6f31aff8cf70b328a237c9317873a0957e637", Pod:"csi-node-driver-9kq89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86486306892", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.010 [INFO][5477] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.010 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" iface="eth0" netns="" Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.010 [INFO][5477] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.011 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.029 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" HandleID="k8s-pod-network.d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.030 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.030 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.038 [WARNING][5484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" HandleID="k8s-pod-network.d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.038 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" HandleID="k8s-pod-network.d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Workload="ci--4081.2.1--a--ab3ee36414-k8s-csi--node--driver--9kq89-eth0" Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.040 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.043173 containerd[1711]: 2024-12-13 01:30:35.041 [INFO][5477] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13" Dec 13 01:30:35.043173 containerd[1711]: time="2024-12-13T01:30:35.043078979Z" level=info msg="TearDown network for sandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\" successfully" Dec 13 01:30:35.051927 containerd[1711]: time="2024-12-13T01:30:35.051650918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:35.051927 containerd[1711]: time="2024-12-13T01:30:35.051727118Z" level=info msg="RemovePodSandbox \"d46fbeed4ca4aee217b45c252d5f5c5d1df48cb36ab9f5c5b7a2f9283eb8eb13\" returns successfully" Dec 13 01:30:35.052617 containerd[1711]: time="2024-12-13T01:30:35.052335159Z" level=info msg="StopPodSandbox for \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\"" Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.098 [WARNING][5502] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0", GenerateName:"calico-kube-controllers-696589d6dc-", Namespace:"calico-system", SelfLink:"", UID:"5c45531f-b3b4-4928-a6d3-7b32cfab7875", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696589d6dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac", Pod:"calico-kube-controllers-696589d6dc-8hhq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali791c15622ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.098 [INFO][5502] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.098 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" iface="eth0" netns="" Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.098 [INFO][5502] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.098 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.119 [INFO][5509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" HandleID="k8s-pod-network.e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.119 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.119 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.128 [WARNING][5509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" HandleID="k8s-pod-network.e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.128 [INFO][5509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" HandleID="k8s-pod-network.e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.129 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.132594 containerd[1711]: 2024-12-13 01:30:35.131 [INFO][5502] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:35.133864 containerd[1711]: time="2024-12-13T01:30:35.133107293Z" level=info msg="TearDown network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\" successfully" Dec 13 01:30:35.133864 containerd[1711]: time="2024-12-13T01:30:35.133149293Z" level=info msg="StopPodSandbox for \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\" returns successfully" Dec 13 01:30:35.133864 containerd[1711]: time="2024-12-13T01:30:35.133595734Z" level=info msg="RemovePodSandbox for \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\"" Dec 13 01:30:35.133864 containerd[1711]: time="2024-12-13T01:30:35.133621174Z" level=info msg="Forcibly stopping sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\"" Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.170 [WARNING][5528] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0", GenerateName:"calico-kube-controllers-696589d6dc-", Namespace:"calico-system", SelfLink:"", UID:"5c45531f-b3b4-4928-a6d3-7b32cfab7875", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696589d6dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"082a9fe09a5278f0d2e4fefe630d5979778fdc4de463147a814e51d4dcab8fac", Pod:"calico-kube-controllers-696589d6dc-8hhq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali791c15622ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.170 [INFO][5528] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.170 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" iface="eth0" netns="" Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.170 [INFO][5528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.170 [INFO][5528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.191 [INFO][5535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" HandleID="k8s-pod-network.e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.191 [INFO][5535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.191 [INFO][5535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.199 [WARNING][5535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" HandleID="k8s-pod-network.e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.199 [INFO][5535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" HandleID="k8s-pod-network.e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--kube--controllers--696589d6dc--8hhq2-eth0" Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.200 [INFO][5535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.204175 containerd[1711]: 2024-12-13 01:30:35.201 [INFO][5528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9" Dec 13 01:30:35.204739 containerd[1711]: time="2024-12-13T01:30:35.204593007Z" level=info msg="TearDown network for sandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\" successfully" Dec 13 01:30:35.217156 containerd[1711]: time="2024-12-13T01:30:35.217114154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:35.217240 containerd[1711]: time="2024-12-13T01:30:35.217191554Z" level=info msg="RemovePodSandbox \"e27f5df28fb36145643f8c39c55f712c38d62c0161d9e70120237f819a1a2fe9\" returns successfully" Dec 13 01:30:35.217920 containerd[1711]: time="2024-12-13T01:30:35.217764915Z" level=info msg="StopPodSandbox for \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\"" Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.250 [WARNING][5553] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0", GenerateName:"calico-apiserver-658975bcf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7f7e2e0-1463-4f9b-9827-a422072878d0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658975bcf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2", Pod:"calico-apiserver-658975bcf4-wgcm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaaf23f112a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.250 [INFO][5553] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.250 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" iface="eth0" netns="" Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.250 [INFO][5553] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.251 [INFO][5553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.270 [INFO][5559] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" HandleID="k8s-pod-network.c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.270 [INFO][5559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.270 [INFO][5559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.280 [WARNING][5559] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" HandleID="k8s-pod-network.c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.280 [INFO][5559] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" HandleID="k8s-pod-network.c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.282 [INFO][5559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.284886 containerd[1711]: 2024-12-13 01:30:35.283 [INFO][5553] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:35.284886 containerd[1711]: time="2024-12-13T01:30:35.284765419Z" level=info msg="TearDown network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\" successfully" Dec 13 01:30:35.284886 containerd[1711]: time="2024-12-13T01:30:35.284789539Z" level=info msg="StopPodSandbox for \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\" returns successfully" Dec 13 01:30:35.286114 containerd[1711]: time="2024-12-13T01:30:35.285835741Z" level=info msg="RemovePodSandbox for \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\"" Dec 13 01:30:35.286114 containerd[1711]: time="2024-12-13T01:30:35.285865661Z" level=info msg="Forcibly stopping sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\"" Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.322 [WARNING][5577] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0", GenerateName:"calico-apiserver-658975bcf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7f7e2e0-1463-4f9b-9827-a422072878d0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658975bcf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"b429071cc6647aa9f4c4b73975c40ecbb9d906a47e2338a43abd63c9c3ec1fe2", Pod:"calico-apiserver-658975bcf4-wgcm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaaf23f112a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.323 [INFO][5577] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.323 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" iface="eth0" netns="" Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.323 [INFO][5577] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.323 [INFO][5577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.342 [INFO][5583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" HandleID="k8s-pod-network.c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.342 [INFO][5583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.342 [INFO][5583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.351 [WARNING][5583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" HandleID="k8s-pod-network.c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.351 [INFO][5583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" HandleID="k8s-pod-network.c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--wgcm5-eth0" Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.352 [INFO][5583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.355406 containerd[1711]: 2024-12-13 01:30:35.354 [INFO][5577] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f" Dec 13 01:30:35.355858 containerd[1711]: time="2024-12-13T01:30:35.355441371Z" level=info msg="TearDown network for sandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\" successfully" Dec 13 01:30:35.408012 containerd[1711]: time="2024-12-13T01:30:35.407567963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:35.408012 containerd[1711]: time="2024-12-13T01:30:35.407649883Z" level=info msg="RemovePodSandbox \"c84991ed83e91f2475cdd5f04cf5c35786e58afe230377c0a3fa3959405a2a4f\" returns successfully" Dec 13 01:30:35.408338 containerd[1711]: time="2024-12-13T01:30:35.408309925Z" level=info msg="StopPodSandbox for \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\"" Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.451 [WARNING][5601] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0", GenerateName:"calico-apiserver-658975bcf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5e3c1dd-da7e-42cd-b3e9-c3b3953719af", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658975bcf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17", Pod:"calico-apiserver-658975bcf4-zrhvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2ecf75839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.451 [INFO][5601] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.451 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" iface="eth0" netns="" Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.451 [INFO][5601] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.451 [INFO][5601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.469 [INFO][5607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" HandleID="k8s-pod-network.720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.470 [INFO][5607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.470 [INFO][5607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.479 [WARNING][5607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" HandleID="k8s-pod-network.720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.479 [INFO][5607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" HandleID="k8s-pod-network.720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.480 [INFO][5607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.484093 containerd[1711]: 2024-12-13 01:30:35.482 [INFO][5601] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:35.484093 containerd[1711]: time="2024-12-13T01:30:35.483907287Z" level=info msg="TearDown network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\" successfully" Dec 13 01:30:35.484093 containerd[1711]: time="2024-12-13T01:30:35.483931767Z" level=info msg="StopPodSandbox for \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\" returns successfully" Dec 13 01:30:35.485624 containerd[1711]: time="2024-12-13T01:30:35.484652529Z" level=info msg="RemovePodSandbox for \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\"" Dec 13 01:30:35.485624 containerd[1711]: time="2024-12-13T01:30:35.484687609Z" level=info msg="Forcibly stopping sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\"" Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.521 [WARNING][5625] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0", GenerateName:"calico-apiserver-658975bcf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5e3c1dd-da7e-42cd-b3e9-c3b3953719af", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658975bcf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"51b199efc45a2abe532e7312ef9d44f143b2e52c8116bf4cd96f6b00ecff8f17", Pod:"calico-apiserver-658975bcf4-zrhvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2ecf75839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.521 [INFO][5625] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.521 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" iface="eth0" netns="" Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.521 [INFO][5625] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.521 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.540 [INFO][5631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" HandleID="k8s-pod-network.720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.540 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.540 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.548 [WARNING][5631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" HandleID="k8s-pod-network.720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.548 [INFO][5631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" HandleID="k8s-pod-network.720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-calico--apiserver--658975bcf4--zrhvc-eth0" Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.549 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.552668 containerd[1711]: 2024-12-13 01:30:35.551 [INFO][5625] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b" Dec 13 01:30:35.552668 containerd[1711]: time="2024-12-13T01:30:35.552566515Z" level=info msg="TearDown network for sandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\" successfully" Dec 13 01:30:35.559998 containerd[1711]: time="2024-12-13T01:30:35.559944531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:35.560147 containerd[1711]: time="2024-12-13T01:30:35.560013611Z" level=info msg="RemovePodSandbox \"720bc50c2505dc299531de734460f6bee43c843778d8dd99bf3b5aa898531d8b\" returns successfully" Dec 13 01:30:35.560580 containerd[1711]: time="2024-12-13T01:30:35.560530972Z" level=info msg="StopPodSandbox for \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\"" Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.590 [WARNING][5649] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d3ab96e8-d944-493f-9479-dccde4369fe1", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0", Pod:"coredns-6f6b679f8f-pvjm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56b84286dd8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.591 [INFO][5649] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.591 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" iface="eth0" netns="" Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.591 [INFO][5649] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.591 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.609 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" HandleID="k8s-pod-network.ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.609 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.609 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.618 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" HandleID="k8s-pod-network.ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.618 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" HandleID="k8s-pod-network.ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.619 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.621771 containerd[1711]: 2024-12-13 01:30:35.620 [INFO][5649] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:35.622523 containerd[1711]: time="2024-12-13T01:30:35.622206544Z" level=info msg="TearDown network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\" successfully" Dec 13 01:30:35.622523 containerd[1711]: time="2024-12-13T01:30:35.622235864Z" level=info msg="StopPodSandbox for \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\" returns successfully" Dec 13 01:30:35.622984 containerd[1711]: time="2024-12-13T01:30:35.622949626Z" level=info msg="RemovePodSandbox for \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\"" Dec 13 01:30:35.623044 containerd[1711]: time="2024-12-13T01:30:35.622996666Z" level=info msg="Forcibly stopping sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\"" Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.659 [WARNING][5673] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d3ab96e8-d944-493f-9479-dccde4369fe1", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"022021ec7a340f0549ac17bda25f69e93d9ee296c0aaafb495e489c756d75bc0", Pod:"coredns-6f6b679f8f-pvjm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56b84286dd8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.659 [INFO][5673] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.659 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" iface="eth0" netns="" Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.659 [INFO][5673] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.659 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.678 [INFO][5679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" HandleID="k8s-pod-network.ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.679 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.679 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.687 [WARNING][5679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" HandleID="k8s-pod-network.ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.687 [INFO][5679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" HandleID="k8s-pod-network.ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--pvjm5-eth0" Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.688 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.691487 containerd[1711]: 2024-12-13 01:30:35.690 [INFO][5673] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b" Dec 13 01:30:35.692444 containerd[1711]: time="2024-12-13T01:30:35.691944614Z" level=info msg="TearDown network for sandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\" successfully" Dec 13 01:30:35.698538 containerd[1711]: time="2024-12-13T01:30:35.698508468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:35.698746 containerd[1711]: time="2024-12-13T01:30:35.698647549Z" level=info msg="RemovePodSandbox \"ae684fde330d55b54e6968bd1bc6944ed446bdf6daef97033ff2103d0d967c2b\" returns successfully" Dec 13 01:30:35.699152 containerd[1711]: time="2024-12-13T01:30:35.699125710Z" level=info msg="StopPodSandbox for \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\"" Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.734 [WARNING][5697] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"820af745-97f5-43f0-a795-d962b8d83e56", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e", Pod:"coredns-6f6b679f8f-clhzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali315e384a0a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.735 [INFO][5697] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.735 [INFO][5697] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" iface="eth0" netns="" Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.735 [INFO][5697] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.735 [INFO][5697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.753 [INFO][5703] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" HandleID="k8s-pod-network.c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.753 [INFO][5703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.753 [INFO][5703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.763 [WARNING][5703] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" HandleID="k8s-pod-network.c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.763 [INFO][5703] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" HandleID="k8s-pod-network.c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.764 [INFO][5703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.767030 containerd[1711]: 2024-12-13 01:30:35.765 [INFO][5697] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:35.767030 containerd[1711]: time="2024-12-13T01:30:35.766910575Z" level=info msg="TearDown network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\" successfully" Dec 13 01:30:35.767030 containerd[1711]: time="2024-12-13T01:30:35.766933975Z" level=info msg="StopPodSandbox for \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\" returns successfully" Dec 13 01:30:35.768895 containerd[1711]: time="2024-12-13T01:30:35.767609577Z" level=info msg="RemovePodSandbox for \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\"" Dec 13 01:30:35.768895 containerd[1711]: time="2024-12-13T01:30:35.767641337Z" level=info msg="Forcibly stopping sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\"" Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.803 [WARNING][5721] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"820af745-97f5-43f0-a795-d962b8d83e56", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-ab3ee36414", ContainerID:"cf9a06fdfc4a3ac566383fe10eb4de073d57dcbf8c1ab87d4a5bd841890e9d3e", Pod:"coredns-6f6b679f8f-clhzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali315e384a0a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.803 [INFO][5721] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.803 [INFO][5721] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" iface="eth0" netns="" Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.803 [INFO][5721] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.803 [INFO][5721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.820 [INFO][5727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" HandleID="k8s-pod-network.c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.820 [INFO][5727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.820 [INFO][5727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.829 [WARNING][5727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" HandleID="k8s-pod-network.c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.829 [INFO][5727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" HandleID="k8s-pod-network.c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Workload="ci--4081.2.1--a--ab3ee36414-k8s-coredns--6f6b679f8f--clhzm-eth0" Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.830 [INFO][5727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:35.833739 containerd[1711]: 2024-12-13 01:30:35.832 [INFO][5721] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d" Dec 13 01:30:35.833739 containerd[1711]: time="2024-12-13T01:30:35.833632439Z" level=info msg="TearDown network for sandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\" successfully" Dec 13 01:30:35.841146 containerd[1711]: time="2024-12-13T01:30:35.841103415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:35.841209 containerd[1711]: time="2024-12-13T01:30:35.841166055Z" level=info msg="RemovePodSandbox \"c44b3e511bb9f44cdb170d42ee189f85dcf8c2ceea003cd41cfe08cf7d813e8d\" returns successfully" Dec 13 01:30:38.627630 kubelet[3168]: I1213 01:30:38.627592 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:31:00.774538 systemd[1]: Started sshd@7-10.200.20.18:22-10.200.16.10:48236.service - OpenSSH per-connection server daemon (10.200.16.10:48236). Dec 13 01:31:01.202719 sshd[5770]: Accepted publickey for core from 10.200.16.10 port 48236 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:01.204892 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:01.208631 systemd-logind[1680]: New session 10 of user core. Dec 13 01:31:01.213617 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:31:01.573697 sshd[5770]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:01.577257 systemd[1]: sshd@7-10.200.20.18:22-10.200.16.10:48236.service: Deactivated successfully. Dec 13 01:31:01.580067 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:31:01.581255 systemd-logind[1680]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:31:01.582370 systemd-logind[1680]: Removed session 10. Dec 13 01:31:01.695591 systemd[1]: run-containerd-runc-k8s.io-99718313e2ffc97d6ca77e09b13fd8dc37c8dc2680fa4b2b25535c28774988e9-runc.eLahtg.mount: Deactivated successfully. Dec 13 01:31:06.656729 systemd[1]: Started sshd@8-10.200.20.18:22-10.200.16.10:48250.service - OpenSSH per-connection server daemon (10.200.16.10:48250). 
Dec 13 01:31:07.079859 sshd[5826]: Accepted publickey for core from 10.200.16.10 port 48250 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:07.081179 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:07.086445 systemd-logind[1680]: New session 11 of user core.
Dec 13 01:31:07.091648 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:31:07.463722 sshd[5826]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:07.468157 systemd[1]: sshd@8-10.200.20.18:22-10.200.16.10:48250.service: Deactivated successfully.
Dec 13 01:31:07.474040 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:31:07.474730 systemd-logind[1680]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:31:07.475754 systemd-logind[1680]: Removed session 11.
Dec 13 01:31:12.540108 systemd[1]: Started sshd@9-10.200.20.18:22-10.200.16.10:40806.service - OpenSSH per-connection server daemon (10.200.16.10:40806).
Dec 13 01:31:12.963054 sshd[5842]: Accepted publickey for core from 10.200.16.10 port 40806 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:12.964344 sshd[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:12.968672 systemd-logind[1680]: New session 12 of user core.
Dec 13 01:31:12.972651 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:31:13.335343 sshd[5842]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:13.338683 systemd[1]: sshd@9-10.200.20.18:22-10.200.16.10:40806.service: Deactivated successfully.
Dec 13 01:31:13.340755 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:31:13.342103 systemd-logind[1680]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:31:13.343254 systemd-logind[1680]: Removed session 12.
Dec 13 01:31:13.413742 systemd[1]: Started sshd@10-10.200.20.18:22-10.200.16.10:40808.service - OpenSSH per-connection server daemon (10.200.16.10:40808).
Dec 13 01:31:13.829134 sshd[5856]: Accepted publickey for core from 10.200.16.10 port 40808 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:13.830420 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:13.834532 systemd-logind[1680]: New session 13 of user core.
Dec 13 01:31:13.842641 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:31:14.228849 sshd[5856]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:14.232027 systemd[1]: sshd@10-10.200.20.18:22-10.200.16.10:40808.service: Deactivated successfully.
Dec 13 01:31:14.233760 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:31:14.234414 systemd-logind[1680]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:31:14.235309 systemd-logind[1680]: Removed session 13.
Dec 13 01:31:14.311722 systemd[1]: Started sshd@11-10.200.20.18:22-10.200.16.10:40818.service - OpenSSH per-connection server daemon (10.200.16.10:40818).
Dec 13 01:31:14.728609 sshd[5867]: Accepted publickey for core from 10.200.16.10 port 40818 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:14.729892 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:14.733704 systemd-logind[1680]: New session 14 of user core.
Dec 13 01:31:14.738627 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:31:15.098455 sshd[5867]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:15.101896 systemd-logind[1680]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:31:15.102712 systemd[1]: sshd@11-10.200.20.18:22-10.200.16.10:40818.service: Deactivated successfully.
Dec 13 01:31:15.105139 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:31:15.106150 systemd-logind[1680]: Removed session 14.
Dec 13 01:31:20.174392 systemd[1]: Started sshd@12-10.200.20.18:22-10.200.16.10:45366.service - OpenSSH per-connection server daemon (10.200.16.10:45366).
Dec 13 01:31:20.586630 sshd[5885]: Accepted publickey for core from 10.200.16.10 port 45366 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:20.587893 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:20.592555 systemd-logind[1680]: New session 15 of user core.
Dec 13 01:31:20.598695 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:31:20.965913 sshd[5885]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:20.969091 systemd-logind[1680]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:31:20.969825 systemd[1]: sshd@12-10.200.20.18:22-10.200.16.10:45366.service: Deactivated successfully.
Dec 13 01:31:20.971867 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:31:20.972908 systemd-logind[1680]: Removed session 15.
Dec 13 01:31:26.048606 systemd[1]: Started sshd@13-10.200.20.18:22-10.200.16.10:45368.service - OpenSSH per-connection server daemon (10.200.16.10:45368).
Dec 13 01:31:26.468827 sshd[5916]: Accepted publickey for core from 10.200.16.10 port 45368 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:26.470131 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:26.474256 systemd-logind[1680]: New session 16 of user core.
Dec 13 01:31:26.481705 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:31:26.836958 sshd[5916]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:26.840683 systemd[1]: sshd@13-10.200.20.18:22-10.200.16.10:45368.service: Deactivated successfully.
Dec 13 01:31:26.842337 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:31:26.843832 systemd-logind[1680]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:31:26.844875 systemd-logind[1680]: Removed session 16.
Dec 13 01:31:31.916660 systemd[1]: Started sshd@14-10.200.20.18:22-10.200.16.10:44462.service - OpenSSH per-connection server daemon (10.200.16.10:44462).
Dec 13 01:31:32.338838 sshd[5928]: Accepted publickey for core from 10.200.16.10 port 44462 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:32.340201 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:32.344062 systemd-logind[1680]: New session 17 of user core.
Dec 13 01:31:32.351680 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:31:32.702716 sshd[5928]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:32.705961 systemd-logind[1680]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:31:32.706614 systemd[1]: sshd@14-10.200.20.18:22-10.200.16.10:44462.service: Deactivated successfully.
Dec 13 01:31:32.708530 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:31:32.709700 systemd-logind[1680]: Removed session 17.
Dec 13 01:31:37.777572 systemd[1]: Started sshd@15-10.200.20.18:22-10.200.16.10:44478.service - OpenSSH per-connection server daemon (10.200.16.10:44478).
Dec 13 01:31:38.194251 sshd[5969]: Accepted publickey for core from 10.200.16.10 port 44478 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:38.195624 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:38.199839 systemd-logind[1680]: New session 18 of user core.
Dec 13 01:31:38.207705 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:31:38.571736 sshd[5969]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:38.575990 systemd[1]: sshd@15-10.200.20.18:22-10.200.16.10:44478.service: Deactivated successfully.
Dec 13 01:31:38.577958 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:31:38.579462 systemd-logind[1680]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:31:38.580454 systemd-logind[1680]: Removed session 18.
Dec 13 01:31:38.649227 systemd[1]: Started sshd@16-10.200.20.18:22-10.200.16.10:37504.service - OpenSSH per-connection server daemon (10.200.16.10:37504).
Dec 13 01:31:39.075638 sshd[5984]: Accepted publickey for core from 10.200.16.10 port 37504 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:39.077125 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:39.082375 systemd-logind[1680]: New session 19 of user core.
Dec 13 01:31:39.088334 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:31:39.556742 sshd[5984]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:39.561691 systemd[1]: sshd@16-10.200.20.18:22-10.200.16.10:37504.service: Deactivated successfully.
Dec 13 01:31:39.564954 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:31:39.567126 systemd-logind[1680]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:31:39.568522 systemd-logind[1680]: Removed session 19.
Dec 13 01:31:39.638866 systemd[1]: Started sshd@17-10.200.20.18:22-10.200.16.10:37518.service - OpenSSH per-connection server daemon (10.200.16.10:37518).
Dec 13 01:31:40.071295 sshd[5995]: Accepted publickey for core from 10.200.16.10 port 37518 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:40.072913 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:40.077469 systemd-logind[1680]: New session 20 of user core.
Dec 13 01:31:40.085696 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:31:42.040617 sshd[5995]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:42.046085 systemd[1]: sshd@17-10.200.20.18:22-10.200.16.10:37518.service: Deactivated successfully.
Dec 13 01:31:42.048861 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:31:42.050428 systemd-logind[1680]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:31:42.052474 systemd-logind[1680]: Removed session 20.
Dec 13 01:31:42.115789 systemd[1]: Started sshd@18-10.200.20.18:22-10.200.16.10:37522.service - OpenSSH per-connection server daemon (10.200.16.10:37522).
Dec 13 01:31:42.525021 sshd[6013]: Accepted publickey for core from 10.200.16.10 port 37522 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:42.525567 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:42.529429 systemd-logind[1680]: New session 21 of user core.
Dec 13 01:31:42.536933 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:31:43.010735 sshd[6013]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:43.015033 systemd[1]: sshd@18-10.200.20.18:22-10.200.16.10:37522.service: Deactivated successfully.
Dec 13 01:31:43.018217 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:31:43.019112 systemd-logind[1680]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:31:43.020168 systemd-logind[1680]: Removed session 21.
Dec 13 01:31:43.089959 systemd[1]: Started sshd@19-10.200.20.18:22-10.200.16.10:37536.service - OpenSSH per-connection server daemon (10.200.16.10:37536).
Dec 13 01:31:43.523899 sshd[6026]: Accepted publickey for core from 10.200.16.10 port 37536 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:43.525303 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:43.530364 systemd-logind[1680]: New session 22 of user core.
Dec 13 01:31:43.535656 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:31:43.891893 sshd[6026]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:43.896317 systemd-logind[1680]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:31:43.897350 systemd[1]: sshd@19-10.200.20.18:22-10.200.16.10:37536.service: Deactivated successfully.
Dec 13 01:31:43.900864 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:31:43.902283 systemd-logind[1680]: Removed session 22.
Dec 13 01:31:48.982767 systemd[1]: Started sshd@20-10.200.20.18:22-10.200.16.10:51982.service - OpenSSH per-connection server daemon (10.200.16.10:51982).
Dec 13 01:31:49.388310 sshd[6056]: Accepted publickey for core from 10.200.16.10 port 51982 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:49.390118 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:49.395224 systemd-logind[1680]: New session 23 of user core.
Dec 13 01:31:49.401660 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:31:49.745468 sshd[6056]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:49.748428 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:31:49.749035 systemd[1]: sshd@20-10.200.20.18:22-10.200.16.10:51982.service: Deactivated successfully.
Dec 13 01:31:49.752438 systemd-logind[1680]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:31:49.753719 systemd-logind[1680]: Removed session 23.
Dec 13 01:31:54.832731 systemd[1]: Started sshd@21-10.200.20.18:22-10.200.16.10:51988.service - OpenSSH per-connection server daemon (10.200.16.10:51988).
Dec 13 01:31:55.237122 sshd[6089]: Accepted publickey for core from 10.200.16.10 port 51988 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:55.238413 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:55.242387 systemd-logind[1680]: New session 24 of user core.
Dec 13 01:31:55.247637 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:31:55.612477 sshd[6089]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:55.615732 systemd[1]: sshd@21-10.200.20.18:22-10.200.16.10:51988.service: Deactivated successfully.
Dec 13 01:31:55.617348 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:31:55.618481 systemd-logind[1680]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:31:55.619450 systemd-logind[1680]: Removed session 24.
Dec 13 01:32:00.688121 systemd[1]: Started sshd@22-10.200.20.18:22-10.200.16.10:52794.service - OpenSSH per-connection server daemon (10.200.16.10:52794).
Dec 13 01:32:01.101119 sshd[6102]: Accepted publickey for core from 10.200.16.10 port 52794 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:32:01.102538 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:01.109138 systemd-logind[1680]: New session 25 of user core.
Dec 13 01:32:01.115661 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:32:01.479757 sshd[6102]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:01.482428 systemd[1]: sshd@22-10.200.20.18:22-10.200.16.10:52794.service: Deactivated successfully.
Dec 13 01:32:01.485927 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:32:01.487453 systemd-logind[1680]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:32:01.489130 systemd-logind[1680]: Removed session 25.
Dec 13 01:32:06.564780 systemd[1]: Started sshd@23-10.200.20.18:22-10.200.16.10:52798.service - OpenSSH per-connection server daemon (10.200.16.10:52798).
Dec 13 01:32:06.988060 sshd[6156]: Accepted publickey for core from 10.200.16.10 port 52798 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:32:06.989378 sshd[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:06.992957 systemd-logind[1680]: New session 26 of user core.
Dec 13 01:32:07.000712 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:32:07.360627 sshd[6156]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:07.364113 systemd[1]: sshd@23-10.200.20.18:22-10.200.16.10:52798.service: Deactivated successfully.
Dec 13 01:32:07.365710 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:32:07.366530 systemd-logind[1680]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:32:07.367482 systemd-logind[1680]: Removed session 26.
Dec 13 01:32:12.443821 systemd[1]: Started sshd@24-10.200.20.18:22-10.200.16.10:60258.service - OpenSSH per-connection server daemon (10.200.16.10:60258).
Dec 13 01:32:12.860200 sshd[6170]: Accepted publickey for core from 10.200.16.10 port 60258 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:32:12.861585 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:12.865928 systemd-logind[1680]: New session 27 of user core.
Dec 13 01:32:12.869667 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:32:13.220717 sshd[6170]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:13.223780 systemd[1]: sshd@24-10.200.20.18:22-10.200.16.10:60258.service: Deactivated successfully.
Dec 13 01:32:13.226003 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:32:13.227236 systemd-logind[1680]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:32:13.228092 systemd-logind[1680]: Removed session 27.
Dec 13 01:32:18.303058 systemd[1]: Started sshd@25-10.200.20.18:22-10.200.16.10:60260.service - OpenSSH per-connection server daemon (10.200.16.10:60260).
Dec 13 01:32:18.740894 sshd[6184]: Accepted publickey for core from 10.200.16.10 port 60260 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:32:18.742650 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:18.746422 systemd-logind[1680]: New session 28 of user core.
Dec 13 01:32:18.756689 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:32:19.112594 sshd[6184]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:19.116310 systemd[1]: sshd@25-10.200.20.18:22-10.200.16.10:60260.service: Deactivated successfully.
Dec 13 01:32:19.118863 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:32:19.120005 systemd-logind[1680]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:32:19.121000 systemd-logind[1680]: Removed session 28.