Jan 15 12:50:36.303231 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 15 12:50:36.303254 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 15 12:50:36.303262 kernel: KASLR enabled
Jan 15 12:50:36.303268 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 15 12:50:36.303275 kernel: printk: bootconsole [pl11] enabled
Jan 15 12:50:36.303281 kernel: efi: EFI v2.7 by EDK II
Jan 15 12:50:36.303288 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 15 12:50:36.303294 kernel: random: crng init done
Jan 15 12:50:36.303300 kernel: ACPI: Early table checksum verification disabled
Jan 15 12:50:36.303306 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 15 12:50:36.303312 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:50:36.303318 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:50:36.303325 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 15 12:50:36.303331 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:50:36.303339 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:50:36.303345 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:50:36.303352 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:50:36.303360 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:50:36.303366 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:50:36.303372 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 15 12:50:36.303379 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:50:36.303385 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 15 12:50:36.303392 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 15 12:50:36.303399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 15 12:50:36.303405 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 15 12:50:36.303411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 15 12:50:36.303418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 15 12:50:36.303424 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 15 12:50:36.303432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 15 12:50:36.303438 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 15 12:50:36.303444 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 15 12:50:36.303451 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 15 12:50:36.303457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 15 12:50:36.303464 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 15 12:50:36.303470 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 15 12:50:36.303484 kernel: Zone ranges:
Jan 15 12:50:36.303491 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 15 12:50:36.303497 kernel: DMA32 empty
Jan 15 12:50:36.303504 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 12:50:36.303511 kernel: Movable zone start for each node
Jan 15 12:50:36.303521 kernel: Early memory node ranges
Jan 15 12:50:36.303528 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 15 12:50:36.303535 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 15 12:50:36.303541 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 15 12:50:36.303548 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 15 12:50:36.303556 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 15 12:50:36.303563 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 15 12:50:36.303570 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 12:50:36.303577 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 15 12:50:36.303583 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 15 12:50:36.303590 kernel: psci: probing for conduit method from ACPI.
Jan 15 12:50:36.303597 kernel: psci: PSCIv1.1 detected in firmware.
Jan 15 12:50:36.303604 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 15 12:50:36.303610 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 15 12:50:36.303617 kernel: psci: SMC Calling Convention v1.4
Jan 15 12:50:36.303624 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 15 12:50:36.303630 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 15 12:50:36.305388 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 15 12:50:36.305405 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 15 12:50:36.305412 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 15 12:50:36.305419 kernel: Detected PIPT I-cache on CPU0
Jan 15 12:50:36.305426 kernel: CPU features: detected: GIC system register CPU interface
Jan 15 12:50:36.305433 kernel: CPU features: detected: Hardware dirty bit management
Jan 15 12:50:36.305440 kernel: CPU features: detected: Spectre-BHB
Jan 15 12:50:36.305447 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 15 12:50:36.305454 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 15 12:50:36.305461 kernel: CPU features: detected: ARM erratum 1418040
Jan 15 12:50:36.305468 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 15 12:50:36.305480 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 15 12:50:36.305487 kernel: alternatives: applying boot alternatives
Jan 15 12:50:36.305495 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 15 12:50:36.305503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 15 12:50:36.305510 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 15 12:50:36.305517 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 12:50:36.305523 kernel: Fallback order for Node 0: 0
Jan 15 12:50:36.305530 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 15 12:50:36.305537 kernel: Policy zone: Normal
Jan 15 12:50:36.305543 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 12:50:36.305550 kernel: software IO TLB: area num 2.
Jan 15 12:50:36.305559 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 15 12:50:36.305566 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Jan 15 12:50:36.305573 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 15 12:50:36.305580 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 12:50:36.305587 kernel: rcu: RCU event tracing is enabled.
Jan 15 12:50:36.305594 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 15 12:50:36.305601 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 12:50:36.305607 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 12:50:36.305614 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 12:50:36.305621 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 15 12:50:36.305628 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 15 12:50:36.305650 kernel: GICv3: 960 SPIs implemented
Jan 15 12:50:36.305659 kernel: GICv3: 0 Extended SPIs implemented
Jan 15 12:50:36.305665 kernel: Root IRQ handler: gic_handle_irq
Jan 15 12:50:36.305672 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 15 12:50:36.305679 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 15 12:50:36.305685 kernel: ITS: No ITS available, not enabling LPIs
Jan 15 12:50:36.305692 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 12:50:36.305699 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 15 12:50:36.305706 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 15 12:50:36.305713 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 15 12:50:36.305720 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 15 12:50:36.305728 kernel: Console: colour dummy device 80x25
Jan 15 12:50:36.305736 kernel: printk: console [tty1] enabled
Jan 15 12:50:36.305743 kernel: ACPI: Core revision 20230628
Jan 15 12:50:36.305750 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 15 12:50:36.305757 kernel: pid_max: default: 32768 minimum: 301
Jan 15 12:50:36.305764 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 15 12:50:36.305771 kernel: landlock: Up and running.
Jan 15 12:50:36.305777 kernel: SELinux: Initializing.
Jan 15 12:50:36.305784 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 12:50:36.305792 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 12:50:36.305800 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 12:50:36.305807 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 12:50:36.305815 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 15 12:50:36.305822 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 15 12:50:36.305828 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 15 12:50:36.305835 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 12:50:36.305842 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 12:50:36.305856 kernel: Remapping and enabling EFI services.
Jan 15 12:50:36.305863 kernel: smp: Bringing up secondary CPUs ...
Jan 15 12:50:36.305870 kernel: Detected PIPT I-cache on CPU1
Jan 15 12:50:36.305877 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 15 12:50:36.305886 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 15 12:50:36.305893 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 15 12:50:36.305901 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 12:50:36.305908 kernel: SMP: Total of 2 processors activated.
Jan 15 12:50:36.305915 kernel: CPU features: detected: 32-bit EL0 Support
Jan 15 12:50:36.305924 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 15 12:50:36.305932 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 15 12:50:36.305939 kernel: CPU features: detected: CRC32 instructions
Jan 15 12:50:36.305946 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 15 12:50:36.305954 kernel: CPU features: detected: LSE atomic instructions
Jan 15 12:50:36.305961 kernel: CPU features: detected: Privileged Access Never
Jan 15 12:50:36.305968 kernel: CPU: All CPU(s) started at EL1
Jan 15 12:50:36.305975 kernel: alternatives: applying system-wide alternatives
Jan 15 12:50:36.305983 kernel: devtmpfs: initialized
Jan 15 12:50:36.305992 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 12:50:36.305999 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 15 12:50:36.306006 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 12:50:36.306014 kernel: SMBIOS 3.1.0 present.
Jan 15 12:50:36.306021 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 15 12:50:36.306028 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 12:50:36.306036 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 15 12:50:36.306043 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 15 12:50:36.306051 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 15 12:50:36.306060 kernel: audit: initializing netlink subsys (disabled)
Jan 15 12:50:36.306067 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 15 12:50:36.306074 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 12:50:36.306082 kernel: cpuidle: using governor menu
Jan 15 12:50:36.306089 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 15 12:50:36.306096 kernel: ASID allocator initialised with 32768 entries
Jan 15 12:50:36.306104 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 12:50:36.306111 kernel: Serial: AMBA PL011 UART driver
Jan 15 12:50:36.306118 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 15 12:50:36.306127 kernel: Modules: 0 pages in range for non-PLT usage
Jan 15 12:50:36.306134 kernel: Modules: 509040 pages in range for PLT usage
Jan 15 12:50:36.306142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 12:50:36.306150 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 12:50:36.306157 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 15 12:50:36.306164 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 15 12:50:36.306172 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 12:50:36.306179 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 12:50:36.306186 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 15 12:50:36.306195 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 15 12:50:36.306202 kernel: ACPI: Added _OSI(Module Device)
Jan 15 12:50:36.306210 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 12:50:36.306217 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 15 12:50:36.306225 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 12:50:36.306232 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 12:50:36.306239 kernel: ACPI: Interpreter enabled
Jan 15 12:50:36.306247 kernel: ACPI: Using GIC for interrupt routing
Jan 15 12:50:36.306254 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 15 12:50:36.306263 kernel: printk: console [ttyAMA0] enabled
Jan 15 12:50:36.306270 kernel: printk: bootconsole [pl11] disabled
Jan 15 12:50:36.306278 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 15 12:50:36.306285 kernel: iommu: Default domain type: Translated
Jan 15 12:50:36.306292 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 15 12:50:36.306299 kernel: efivars: Registered efivars operations
Jan 15 12:50:36.306307 kernel: vgaarb: loaded
Jan 15 12:50:36.306314 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 15 12:50:36.306321 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 12:50:36.306330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 12:50:36.306337 kernel: pnp: PnP ACPI init
Jan 15 12:50:36.306345 kernel: pnp: PnP ACPI: found 0 devices
Jan 15 12:50:36.306352 kernel: NET: Registered PF_INET protocol family
Jan 15 12:50:36.306359 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 15 12:50:36.306367 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 15 12:50:36.306374 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 12:50:36.306381 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 12:50:36.306389 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 15 12:50:36.306397 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 15 12:50:36.306405 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 12:50:36.306413 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 12:50:36.306420 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 12:50:36.306427 kernel: PCI: CLS 0 bytes, default 64
Jan 15 12:50:36.306434 kernel: kvm [1]: HYP mode not available
Jan 15 12:50:36.306441 kernel: Initialise system trusted keyrings
Jan 15 12:50:36.306449 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 15 12:50:36.306456 kernel: Key type asymmetric registered
Jan 15 12:50:36.306465 kernel: Asymmetric key parser 'x509' registered
Jan 15 12:50:36.306472 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 15 12:50:36.306479 kernel: io scheduler mq-deadline registered
Jan 15 12:50:36.306487 kernel: io scheduler kyber registered
Jan 15 12:50:36.306494 kernel: io scheduler bfq registered
Jan 15 12:50:36.306501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 15 12:50:36.306508 kernel: thunder_xcv, ver 1.0
Jan 15 12:50:36.306515 kernel: thunder_bgx, ver 1.0
Jan 15 12:50:36.306523 kernel: nicpf, ver 1.0
Jan 15 12:50:36.306530 kernel: nicvf, ver 1.0
Jan 15 12:50:36.306691 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 15 12:50:36.306767 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-15T12:50:35 UTC (1736945435)
Jan 15 12:50:36.306778 kernel: efifb: probing for efifb
Jan 15 12:50:36.306786 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 15 12:50:36.306793 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 15 12:50:36.306800 kernel: efifb: scrolling: redraw
Jan 15 12:50:36.306808 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 15 12:50:36.306818 kernel: Console: switching to colour frame buffer device 128x48
Jan 15 12:50:36.306825 kernel: fb0: EFI VGA frame buffer device
Jan 15 12:50:36.306833 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 15 12:50:36.306840 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 15 12:50:36.306847 kernel: No ACPI PMU IRQ for CPU0
Jan 15 12:50:36.306855 kernel: No ACPI PMU IRQ for CPU1
Jan 15 12:50:36.306862 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 15 12:50:36.306869 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 15 12:50:36.306877 kernel: watchdog: Hard watchdog permanently disabled
Jan 15 12:50:36.306886 kernel: NET: Registered PF_INET6 protocol family
Jan 15 12:50:36.306893 kernel: Segment Routing with IPv6
Jan 15 12:50:36.306901 kernel: In-situ OAM (IOAM) with IPv6
Jan 15 12:50:36.306908 kernel: NET: Registered PF_PACKET protocol family
Jan 15 12:50:36.306915 kernel: Key type dns_resolver registered
Jan 15 12:50:36.306923 kernel: registered taskstats version 1
Jan 15 12:50:36.306930 kernel: Loading compiled-in X.509 certificates
Jan 15 12:50:36.306937 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 15 12:50:36.306945 kernel: Key type .fscrypt registered
Jan 15 12:50:36.306953 kernel: Key type fscrypt-provisioning registered
Jan 15 12:50:36.306961 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 15 12:50:36.306968 kernel: ima: Allocated hash algorithm: sha1
Jan 15 12:50:36.306975 kernel: ima: No architecture policies found
Jan 15 12:50:36.306983 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 15 12:50:36.306990 kernel: clk: Disabling unused clocks
Jan 15 12:50:36.306997 kernel: Freeing unused kernel memory: 39360K
Jan 15 12:50:36.307005 kernel: Run /init as init process
Jan 15 12:50:36.307012 kernel: with arguments:
Jan 15 12:50:36.307021 kernel: /init
Jan 15 12:50:36.307028 kernel: with environment:
Jan 15 12:50:36.307035 kernel: HOME=/
Jan 15 12:50:36.307042 kernel: TERM=linux
Jan 15 12:50:36.307049 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 15 12:50:36.307059 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 15 12:50:36.307068 systemd[1]: Detected virtualization microsoft.
Jan 15 12:50:36.307076 systemd[1]: Detected architecture arm64.
Jan 15 12:50:36.307086 systemd[1]: Running in initrd.
Jan 15 12:50:36.307093 systemd[1]: No hostname configured, using default hostname.
Jan 15 12:50:36.307101 systemd[1]: Hostname set to .
Jan 15 12:50:36.307109 systemd[1]: Initializing machine ID from random generator.
Jan 15 12:50:36.307117 systemd[1]: Queued start job for default target initrd.target.
Jan 15 12:50:36.307125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 12:50:36.307133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 12:50:36.307141 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 15 12:50:36.307151 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 12:50:36.307159 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 15 12:50:36.307167 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 15 12:50:36.307177 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 15 12:50:36.307185 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 15 12:50:36.307193 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 12:50:36.307203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 12:50:36.307211 systemd[1]: Reached target paths.target - Path Units.
Jan 15 12:50:36.307219 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 12:50:36.307227 systemd[1]: Reached target swap.target - Swaps.
Jan 15 12:50:36.307235 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 12:50:36.307243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 12:50:36.307251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 12:50:36.307259 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 12:50:36.307267 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 15 12:50:36.307276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 12:50:36.307284 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 12:50:36.307292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 12:50:36.307300 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 12:50:36.307308 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 15 12:50:36.307316 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 12:50:36.307324 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 15 12:50:36.307331 systemd[1]: Starting systemd-fsck-usr.service...
Jan 15 12:50:36.307339 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 12:50:36.307349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 12:50:36.307376 systemd-journald[217]: Collecting audit messages is disabled.
Jan 15 12:50:36.307396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:50:36.307404 systemd-journald[217]: Journal started
Jan 15 12:50:36.307425 systemd-journald[217]: Runtime Journal (/run/log/journal/35789dc8cf94409bb65e3648a850b131) is 8.0M, max 78.5M, 70.5M free.
Jan 15 12:50:36.319128 systemd-modules-load[218]: Inserted module 'overlay'
Jan 15 12:50:36.351469 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 15 12:50:36.351493 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 12:50:36.351514 kernel: Bridge firewalling registered
Jan 15 12:50:36.351727 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 15 12:50:36.362717 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 15 12:50:36.368845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 12:50:36.381097 systemd[1]: Finished systemd-fsck-usr.service.
Jan 15 12:50:36.392655 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 12:50:36.403659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:50:36.421888 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:50:36.429823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 12:50:36.453819 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 12:50:36.469551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 12:50:36.478989 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:50:36.500799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 12:50:36.507252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 12:50:36.530822 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 15 12:50:36.539866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 12:50:36.557357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 12:50:36.576501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 12:50:36.591086 dracut-cmdline[249]: dracut-dracut-053
Jan 15 12:50:36.596264 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 15 12:50:36.627871 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 12:50:36.659363 systemd-resolved[263]: Positive Trust Anchors:
Jan 15 12:50:36.659379 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 12:50:36.659410 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 12:50:36.666055 systemd-resolved[263]: Defaulting to hostname 'linux'.
Jan 15 12:50:36.667056 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 12:50:36.681383 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 12:50:36.744658 kernel: SCSI subsystem initialized
Jan 15 12:50:36.751655 kernel: Loading iSCSI transport class v2.0-870.
Jan 15 12:50:36.762681 kernel: iscsi: registered transport (tcp)
Jan 15 12:50:36.779889 kernel: iscsi: registered transport (qla4xxx)
Jan 15 12:50:36.779932 kernel: QLogic iSCSI HBA Driver
Jan 15 12:50:36.821146 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 15 12:50:36.834781 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 15 12:50:36.857661 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 15 12:50:36.857697 kernel: device-mapper: uevent: version 1.0.3
Jan 15 12:50:36.867071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 15 12:50:36.917665 kernel: raid6: neonx8 gen() 15783 MB/s
Jan 15 12:50:36.937651 kernel: raid6: neonx4 gen() 15656 MB/s
Jan 15 12:50:36.957651 kernel: raid6: neonx2 gen() 13215 MB/s
Jan 15 12:50:36.978651 kernel: raid6: neonx1 gen() 10480 MB/s
Jan 15 12:50:36.998649 kernel: raid6: int64x8 gen() 6958 MB/s
Jan 15 12:50:37.018649 kernel: raid6: int64x4 gen() 7346 MB/s
Jan 15 12:50:37.039655 kernel: raid6: int64x2 gen() 6118 MB/s
Jan 15 12:50:37.062997 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 15 12:50:37.063009 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Jan 15 12:50:37.086640 kernel: raid6: .... xor() 11920 MB/s, rmw enabled
Jan 15 12:50:37.086657 kernel: raid6: using neon recovery algorithm
Jan 15 12:50:37.095655 kernel: xor: measuring software checksum speed
Jan 15 12:50:37.102158 kernel: 8regs : 18223 MB/sec
Jan 15 12:50:37.102170 kernel: 32regs : 19674 MB/sec
Jan 15 12:50:37.105803 kernel: arm64_neon : 26954 MB/sec
Jan 15 12:50:37.109781 kernel: xor: using function: arm64_neon (26954 MB/sec)
Jan 15 12:50:37.160700 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 15 12:50:37.171874 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 12:50:37.188814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 12:50:37.211537 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jan 15 12:50:37.214767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 12:50:37.240417 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 12:50:37.257588 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation
Jan 15 12:50:37.285869 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 12:50:37.299947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 12:50:37.334992 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 12:50:37.355292 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 15 12:50:37.379733 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 15 12:50:37.393575 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 12:50:37.402354 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 12:50:37.421808 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 12:50:37.442816 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 15 12:50:37.453967 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 12:50:37.480512 kernel: hv_vmbus: Vmbus version:5.3
Jan 15 12:50:37.486259 kernel: hv_vmbus: registering driver hid_hyperv
Jan 15 12:50:37.486293 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 15 12:50:37.492098 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 15 12:50:37.508944 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 15 12:50:37.508968 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 15 12:50:37.508978 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 15 12:50:37.527328 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 15 12:50:37.534807 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 15 12:50:37.547889 kernel: hv_vmbus: registering driver hv_storvsc
Jan 15 12:50:37.547914 kernel: scsi host1: storvsc_host_t
Jan 15 12:50:37.540567 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:50:37.587072 kernel: hv_vmbus: registering driver hv_netvsc
Jan 15 12:50:37.587118 kernel: scsi host0: storvsc_host_t
Jan 15 12:50:37.587310 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 15 12:50:37.587331 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 15 12:50:37.565270 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:50:37.571621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:50:37.571795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:50:37.593702 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:50:37.631684 kernel: PTP clock support registered
Jan 15 12:50:37.632912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:50:37.602017 kernel: hv_utils: Registering HyperV Utility Driver
Jan 15 12:50:37.622651 kernel: hv_vmbus: registering driver hv_utils
Jan 15 12:50:37.622705 kernel: hv_utils: Heartbeat IC version 3.0
Jan 15 12:50:37.622714 kernel: hv_utils: Shutdown IC version 3.2
Jan 15 12:50:37.622726 kernel: hv_utils: TimeSync IC version 4.0
Jan 15 12:50:37.622734 systemd-journald[217]: Time jumped backwards, rotating.
Jan 15 12:50:37.622776 kernel: hv_netvsc 000d3af6-a292-000d-3af6-a292000d3af6 eth0: VF slot 1 added
Jan 15 12:50:37.597052 systemd-resolved[263]: Clock change detected. Flushing caches.
Jan 15 12:50:37.607372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:50:37.609398 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:50:37.657735 kernel: hv_vmbus: registering driver hv_pci
Jan 15 12:50:37.657782 kernel: hv_pci c8876571-935c-4c64-b13c-50562eed65b1: PCI VMBus probing: Using version 0x10004
Jan 15 12:50:37.768122 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Jan 15 12:50:37.768285 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 15 12:50:37.768303 kernel: hv_pci c8876571-935c-4c64-b13c-50562eed65b1: PCI host bridge to bus 935c:00
Jan 15 12:50:37.768391 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Jan 15 12:50:37.768479 kernel: pci_bus 935c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 15 12:50:37.768573 kernel: pci_bus 935c:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 15 12:50:37.768650 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 15 12:50:37.768741 kernel: pci 935c:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 15 12:50:37.768835 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Jan 15 12:50:37.768919 kernel: pci 935c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 12:50:37.769002 kernel: sd 1:0:0:0: [sda] Write Protect is off
Jan 15 12:50:37.769083 kernel: pci 935c:00:02.0: enabling Extended Tags
Jan 15 12:50:37.769165 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 15 12:50:37.769269 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 15 12:50:37.769354 kernel: pci 935c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 935c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 15 12:50:37.769437 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:50:37.769446 kernel: pci_bus 935c:00: busn_res: [bus 00-ff] end is updated to 00
Jan 15 12:50:37.769521 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Jan 15 12:50:37.769604 kernel: pci 935c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 12:50:37.658178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:50:37.722649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:50:37.761388 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:50:37.806361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:50:37.835513 kernel: mlx5_core 935c:00:02.0: enabling device (0000 -> 0002)
Jan 15 12:50:38.051377 kernel: mlx5_core 935c:00:02.0: firmware version: 16.30.1284
Jan 15 12:50:38.051514 kernel: hv_netvsc 000d3af6-a292-000d-3af6-a292000d3af6 eth0: VF registering: eth1
Jan 15 12:50:38.051608 kernel: mlx5_core 935c:00:02.0 eth1: joined to eth0
Jan 15 12:50:38.051700 kernel: mlx5_core 935c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 15 12:50:38.061213 kernel: mlx5_core 935c:00:02.0 enP37724s1: renamed from eth1
Jan 15 12:50:38.380897 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 15 12:50:38.407220 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (489)
Jan 15 12:50:38.421934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 15 12:50:38.444218 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 15 12:50:38.521264 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (515)
Jan 15 12:50:38.534778 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 15 12:50:38.541401 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 15 12:50:38.573449 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 15 12:50:38.599237 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:50:38.609215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:50:39.616255 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:50:39.616897 disk-uuid[610]: The operation has completed successfully.
Jan 15 12:50:39.674610 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 15 12:50:39.674704 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 15 12:50:39.704340 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 15 12:50:39.716325 sh[697]: Success
Jan 15 12:50:39.747229 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 15 12:50:39.962664 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 15 12:50:39.974122 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 15 12:50:39.984337 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 15 12:50:40.019631 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 15 12:50:40.019706 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:50:40.026724 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 15 12:50:40.031765 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 15 12:50:40.035953 kernel: BTRFS info (device dm-0): using free space tree
Jan 15 12:50:40.357704 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 15 12:50:40.363001 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 15 12:50:40.382526 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 15 12:50:40.395377 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 15 12:50:40.415549 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:50:40.415574 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:50:40.415592 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:50:40.438312 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:50:40.446788 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 15 12:50:40.460215 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:50:40.468707 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 15 12:50:40.485822 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 15 12:50:40.525822 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 12:50:40.545383 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 12:50:40.572514 systemd-networkd[881]: lo: Link UP
Jan 15 12:50:40.572525 systemd-networkd[881]: lo: Gained carrier
Jan 15 12:50:40.574127 systemd-networkd[881]: Enumeration completed
Jan 15 12:50:40.574246 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 12:50:40.576786 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:50:40.576789 systemd-networkd[881]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 12:50:40.586169 systemd[1]: Reached target network.target - Network.
Jan 15 12:50:40.670216 kernel: mlx5_core 935c:00:02.0 enP37724s1: Link up
Jan 15 12:50:40.713206 kernel: hv_netvsc 000d3af6-a292-000d-3af6-a292000d3af6 eth0: Data path switched to VF: enP37724s1
Jan 15 12:50:40.713733 systemd-networkd[881]: enP37724s1: Link UP
Jan 15 12:50:40.713825 systemd-networkd[881]: eth0: Link UP
Jan 15 12:50:40.713923 systemd-networkd[881]: eth0: Gained carrier
Jan 15 12:50:40.713932 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:50:40.728533 systemd-networkd[881]: enP37724s1: Gained carrier
Jan 15 12:50:40.747235 systemd-networkd[881]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 15 12:50:41.575182 ignition[832]: Ignition 2.19.0
Jan 15 12:50:41.575763 ignition[832]: Stage: fetch-offline
Jan 15 12:50:41.579816 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 12:50:41.575824 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:41.575832 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:41.575951 ignition[832]: parsed url from cmdline: ""
Jan 15 12:50:41.600512 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 15 12:50:41.575954 ignition[832]: no config URL provided
Jan 15 12:50:41.575959 ignition[832]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 12:50:41.575966 ignition[832]: no config at "/usr/lib/ignition/user.ign"
Jan 15 12:50:41.575972 ignition[832]: failed to fetch config: resource requires networking
Jan 15 12:50:41.576163 ignition[832]: Ignition finished successfully
Jan 15 12:50:41.623901 ignition[890]: Ignition 2.19.0
Jan 15 12:50:41.623908 ignition[890]: Stage: fetch
Jan 15 12:50:41.624083 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:41.624092 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:41.624206 ignition[890]: parsed url from cmdline: ""
Jan 15 12:50:41.624210 ignition[890]: no config URL provided
Jan 15 12:50:41.624214 ignition[890]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 12:50:41.624222 ignition[890]: no config at "/usr/lib/ignition/user.ign"
Jan 15 12:50:41.624242 ignition[890]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 15 12:50:41.731947 ignition[890]: GET result: OK
Jan 15 12:50:41.732048 ignition[890]: config has been read from IMDS userdata
Jan 15 12:50:41.732089 ignition[890]: parsing config with SHA512: c12c29b015a2276a67c829af7b6375cf12d03856ea5895b91759ad9752523a6b6cf71d5819929620764ab4bd0b3fd069d2caa107f1498c2b3492f97e695e56c9
Jan 15 12:50:41.736078 unknown[890]: fetched base config from "system"
Jan 15 12:50:41.736541 ignition[890]: fetch: fetch complete
Jan 15 12:50:41.736085 unknown[890]: fetched base config from "system"
Jan 15 12:50:41.736545 ignition[890]: fetch: fetch passed
Jan 15 12:50:41.736090 unknown[890]: fetched user config from "azure"
Jan 15 12:50:41.736587 ignition[890]: Ignition finished successfully
Jan 15 12:50:41.745334 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 15 12:50:41.780417 ignition[897]: Ignition 2.19.0
Jan 15 12:50:41.759463 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 15 12:50:41.780432 ignition[897]: Stage: kargs
Jan 15 12:50:41.787210 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 15 12:50:41.780613 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:41.780621 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:41.781802 ignition[897]: kargs: kargs passed
Jan 15 12:50:41.781860 ignition[897]: Ignition finished successfully
Jan 15 12:50:41.815405 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 15 12:50:41.835159 ignition[903]: Ignition 2.19.0
Jan 15 12:50:41.835171 ignition[903]: Stage: disks
Jan 15 12:50:41.841232 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 15 12:50:41.835383 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:41.848914 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 12:50:41.835393 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:41.859422 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 12:50:41.836466 ignition[903]: disks: disks passed
Jan 15 12:50:41.870126 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 12:50:41.836516 ignition[903]: Ignition finished successfully
Jan 15 12:50:41.881116 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 12:50:41.886546 systemd-networkd[881]: enP37724s1: Gained IPv6LL
Jan 15 12:50:41.898232 systemd[1]: Reached target basic.target - Basic System.
Jan 15 12:50:41.921493 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 12:50:41.998597 systemd-fsck[911]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 15 12:50:42.008510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 12:50:42.026422 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 12:50:42.082550 systemd-networkd[881]: eth0: Gained IPv6LL
Jan 15 12:50:42.087614 kernel: EXT4-fs (sda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 15 12:50:42.083721 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 12:50:42.092450 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 12:50:42.142273 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:50:42.151562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 12:50:42.169776 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 15 12:50:42.190844 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (922)
Jan 15 12:50:42.190866 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:50:42.190532 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 12:50:42.219026 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:50:42.219048 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:50:42.190571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 12:50:42.220355 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 12:50:42.241212 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:50:42.242471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:50:42.259362 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 12:50:42.742327 coreos-metadata[924]: Jan 15 12:50:42.742 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 12:50:42.752072 coreos-metadata[924]: Jan 15 12:50:42.752 INFO Fetch successful
Jan 15 12:50:42.757610 coreos-metadata[924]: Jan 15 12:50:42.757 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 15 12:50:42.782569 coreos-metadata[924]: Jan 15 12:50:42.782 INFO Fetch successful
Jan 15 12:50:42.801242 coreos-metadata[924]: Jan 15 12:50:42.801 INFO wrote hostname ci-4081.3.0-a-b64d8040ed to /sysroot/etc/hostname
Jan 15 12:50:42.809479 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 12:50:43.055827 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 12:50:43.098232 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory
Jan 15 12:50:43.131730 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 12:50:43.141640 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 12:50:44.034090 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 12:50:44.051406 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 12:50:44.065433 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 12:50:44.083007 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:50:44.078519 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 12:50:44.108019 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 12:50:44.117338 ignition[1040]: INFO : Ignition 2.19.0
Jan 15 12:50:44.117338 ignition[1040]: INFO : Stage: mount
Jan 15 12:50:44.117338 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:44.117338 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:44.117338 ignition[1040]: INFO : mount: mount passed
Jan 15 12:50:44.117338 ignition[1040]: INFO : Ignition finished successfully
Jan 15 12:50:44.120554 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 12:50:44.133444 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 12:50:44.154478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:50:44.202985 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1051)
Jan 15 12:50:44.203035 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:50:44.209123 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:50:44.213305 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:50:44.220219 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:50:44.221797 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:50:44.253394 ignition[1068]: INFO : Ignition 2.19.0
Jan 15 12:50:44.253394 ignition[1068]: INFO : Stage: files
Jan 15 12:50:44.261843 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:44.261843 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:44.261843 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Jan 15 12:50:44.282871 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 15 12:50:44.282871 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 15 12:50:44.332744 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 15 12:50:44.340185 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 15 12:50:44.340185 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 15 12:50:44.333152 unknown[1068]: wrote ssh authorized keys file for user: core
Jan 15 12:50:44.360445 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 15 12:50:44.369861 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 15 12:50:44.369861 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:50:44.369861 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 15 12:50:45.806070 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 15 12:50:46.625141 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 15 12:50:46.955328 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 15 12:50:47.169693 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:50:47.169693 ignition[1068]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 15 12:50:47.202878 ignition[1068]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: files passed
Jan 15 12:50:47.218514 ignition[1068]: INFO : Ignition finished successfully
Jan 15 12:50:47.219485 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 15 12:50:47.260480 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 15 12:50:47.279381 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 15 12:50:47.382731 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:50:47.382731 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:50:47.298955 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 12:50:47.405632 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:50:47.299050 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 12:50:47.314060 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 12:50:47.339022 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 12:50:47.355503 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 12:50:47.417497 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 12:50:47.417651 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 12:50:47.426737 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 12:50:47.439290 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 12:50:47.450556 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 12:50:47.453414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 12:50:47.500122 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:50:47.522496 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 12:50:47.541618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:50:47.548236 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:50:47.560583 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 12:50:47.571460 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 12:50:47.571626 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:50:47.587491 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 12:50:47.599338 systemd[1]: Stopped target basic.target - Basic System. Jan 15 12:50:47.609476 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 12:50:47.619916 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 12:50:47.631579 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 12:50:47.643365 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 12:50:47.654818 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 12:50:47.666505 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 12:50:47.678776 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 12:50:47.689440 systemd[1]: Stopped target swap.target - Swaps. Jan 15 12:50:47.698758 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 12:50:47.698954 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 12:50:47.713821 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:50:47.725119 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 15 12:50:47.738531 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 12:50:47.744515 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:50:47.751609 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 12:50:47.751780 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 12:50:47.769600 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 12:50:47.769777 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 12:50:47.781469 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 12:50:47.781625 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 12:50:47.793164 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 15 12:50:47.793337 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 12:50:47.827342 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 12:50:47.851506 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 12:50:47.863867 ignition[1121]: INFO : Ignition 2.19.0 Jan 15 12:50:47.863867 ignition[1121]: INFO : Stage: umount Jan 15 12:50:47.863867 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 12:50:47.863867 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:50:47.863867 ignition[1121]: INFO : umount: umount passed Jan 15 12:50:47.863867 ignition[1121]: INFO : Ignition finished successfully Jan 15 12:50:47.857422 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 12:50:47.857650 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:50:47.870660 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 12:50:47.870815 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 12:50:47.882816 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 12:50:47.884223 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 12:50:47.893045 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 12:50:47.894217 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 12:50:47.906350 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 12:50:47.906407 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 12:50:47.915912 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 12:50:47.915959 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 12:50:47.931313 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 12:50:47.931368 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 12:50:47.942482 systemd[1]: Stopped target network.target - Network. Jan 15 12:50:47.953473 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 12:50:47.953559 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 12:50:47.965645 systemd[1]: Stopped target paths.target - Path Units. Jan 15 12:50:47.975292 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 12:50:47.986259 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:50:47.996571 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 15 12:50:48.006788 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 12:50:48.017636 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 12:50:48.017687 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 12:50:48.028794 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 12:50:48.028836 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 12:50:48.038852 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 12:50:48.038906 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 12:50:48.049001 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 12:50:48.049049 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 12:50:48.066730 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 12:50:48.076630 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 12:50:48.086458 systemd-networkd[881]: eth0: DHCPv6 lease lost Jan 15 12:50:48.280380 kernel: hv_netvsc 000d3af6-a292-000d-3af6-a292000d3af6 eth0: Data path switched from VF: enP37724s1 Jan 15 12:50:48.088725 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 12:50:48.089960 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 12:50:48.090318 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 12:50:48.099040 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 12:50:48.099085 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:50:48.128400 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 12:50:48.140620 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 12:50:48.140691 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 12:50:48.152221 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:50:48.171013 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 12:50:48.171122 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 12:50:48.196154 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 12:50:48.196279 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:50:48.207373 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 12:50:48.207446 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 12:50:48.219234 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 12:50:48.219293 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:50:48.231329 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 12:50:48.231479 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:50:48.245185 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 12:50:48.245470 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 12:50:48.257072 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 12:50:48.257114 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:50:48.276631 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 12:50:48.276690 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 15 12:50:48.290485 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 12:50:48.290554 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 12:50:48.306472 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 12:50:48.306534 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:50:48.342474 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 12:50:48.358121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 12:50:48.358222 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:50:48.375788 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 15 12:50:48.375846 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 12:50:48.386928 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 12:50:48.386974 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:50:48.398943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:50:48.398996 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:50:48.410988 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 12:50:48.411101 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 12:50:48.421631 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 12:50:48.421717 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 12:50:48.606182 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 12:50:48.606330 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 12:50:48.616768 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 12:50:48.626626 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 12:50:48.626708 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 12:50:48.649480 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 12:50:48.767948 systemd[1]: Switching root. 
Jan 15 12:50:48.803034 systemd-journald[217]: Journal stopped
Total pages: 1032156 Jan 15 12:50:36.305537 kernel: Policy zone: Normal Jan 15 12:50:36.305543 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 12:50:36.305550 kernel: software IO TLB: area num 2. Jan 15 12:50:36.305559 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 15 12:50:36.305566 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Jan 15 12:50:36.305573 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 15 12:50:36.305580 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 12:50:36.305587 kernel: rcu: RCU event tracing is enabled. Jan 15 12:50:36.305594 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 15 12:50:36.305601 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 12:50:36.305607 kernel: Tracing variant of Tasks RCU enabled. Jan 15 12:50:36.305614 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 12:50:36.305621 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 15 12:50:36.305628 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 15 12:50:36.305650 kernel: GICv3: 960 SPIs implemented Jan 15 12:50:36.305659 kernel: GICv3: 0 Extended SPIs implemented Jan 15 12:50:36.305665 kernel: Root IRQ handler: gic_handle_irq Jan 15 12:50:36.305672 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 15 12:50:36.305679 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 15 12:50:36.305685 kernel: ITS: No ITS available, not enabling LPIs Jan 15 12:50:36.305692 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 15 12:50:36.305699 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 15 12:50:36.305706 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 15 12:50:36.305713 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 15 12:50:36.305720 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 15 12:50:36.305728 kernel: Console: colour dummy device 80x25 Jan 15 12:50:36.305736 kernel: printk: console [tty1] enabled Jan 15 12:50:36.305743 kernel: ACPI: Core revision 20230628 Jan 15 12:50:36.305750 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 15 12:50:36.305757 kernel: pid_max: default: 32768 minimum: 301 Jan 15 12:50:36.305764 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 15 12:50:36.305771 kernel: landlock: Up and running. Jan 15 12:50:36.305777 kernel: SELinux: Initializing. Jan 15 12:50:36.305784 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 12:50:36.305792 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 12:50:36.305800 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 12:50:36.305807 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 12:50:36.305815 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 15 12:50:36.305822 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 15 12:50:36.305828 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 15 12:50:36.305835 kernel: rcu: Hierarchical SRCU implementation. 
Jan 15 12:50:36.305842 kernel: rcu: Max phase no-delay instances is 400. Jan 15 12:50:36.305856 kernel: Remapping and enabling EFI services. Jan 15 12:50:36.305863 kernel: smp: Bringing up secondary CPUs ... Jan 15 12:50:36.305870 kernel: Detected PIPT I-cache on CPU1 Jan 15 12:50:36.305877 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 15 12:50:36.305886 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 15 12:50:36.305893 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 15 12:50:36.305901 kernel: smp: Brought up 1 node, 2 CPUs Jan 15 12:50:36.305908 kernel: SMP: Total of 2 processors activated. Jan 15 12:50:36.305915 kernel: CPU features: detected: 32-bit EL0 Support Jan 15 12:50:36.305924 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 15 12:50:36.305932 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 15 12:50:36.305939 kernel: CPU features: detected: CRC32 instructions Jan 15 12:50:36.305946 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 15 12:50:36.305954 kernel: CPU features: detected: LSE atomic instructions Jan 15 12:50:36.305961 kernel: CPU features: detected: Privileged Access Never Jan 15 12:50:36.305968 kernel: CPU: All CPU(s) started at EL1 Jan 15 12:50:36.305975 kernel: alternatives: applying system-wide alternatives Jan 15 12:50:36.305983 kernel: devtmpfs: initialized Jan 15 12:50:36.305992 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 12:50:36.305999 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 15 12:50:36.306006 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 12:50:36.306014 kernel: SMBIOS 3.1.0 present. Jan 15 12:50:36.306021 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 15 12:50:36.306028 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 12:50:36.306036 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 15 12:50:36.306043 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 15 12:50:36.306051 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 15 12:50:36.306060 kernel: audit: initializing netlink subsys (disabled) Jan 15 12:50:36.306067 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 15 12:50:36.306074 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 12:50:36.306082 kernel: cpuidle: using governor menu Jan 15 12:50:36.306089 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 15 12:50:36.306096 kernel: ASID allocator initialised with 32768 entries Jan 15 12:50:36.306104 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 12:50:36.306111 kernel: Serial: AMBA PL011 UART driver Jan 15 12:50:36.306118 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 15 12:50:36.306127 kernel: Modules: 0 pages in range for non-PLT usage Jan 15 12:50:36.306134 kernel: Modules: 509040 pages in range for PLT usage Jan 15 12:50:36.306142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 12:50:36.306150 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 12:50:36.306157 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 15 12:50:36.306164 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 15 12:50:36.306172 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 12:50:36.306179 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 12:50:36.306186 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 15 12:50:36.306195 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 15 12:50:36.306202 kernel: ACPI: Added _OSI(Module Device) Jan 15 12:50:36.306210 kernel: ACPI: Added _OSI(Processor Device) Jan 15 12:50:36.306217 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 15 12:50:36.306225 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 12:50:36.306232 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 12:50:36.306239 kernel: ACPI: Interpreter enabled Jan 15 12:50:36.306247 kernel: ACPI: Using GIC for interrupt routing Jan 15 12:50:36.306254 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 15 12:50:36.306263 kernel: printk: console [ttyAMA0] enabled Jan 15 12:50:36.306270 kernel: printk: bootconsole [pl11] disabled Jan 15 12:50:36.306278 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 15 12:50:36.306285 kernel: iommu: Default domain type: Translated Jan 15 12:50:36.306292 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 15 12:50:36.306299 kernel: efivars: Registered efivars operations Jan 15 12:50:36.306307 kernel: vgaarb: loaded Jan 15 12:50:36.306314 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 15 12:50:36.306321 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 12:50:36.306330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 12:50:36.306337 kernel: pnp: PnP ACPI init Jan 15 12:50:36.306345 kernel: pnp: PnP ACPI: found 0 devices Jan 15 12:50:36.306352 kernel: NET: Registered PF_INET protocol family Jan 15 12:50:36.306359 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 12:50:36.306367 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 12:50:36.306374 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 12:50:36.306381 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 12:50:36.306389 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 12:50:36.306397 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 12:50:36.306405 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 12:50:36.306413 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 12:50:36.306420 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 12:50:36.306427 kernel: PCI: CLS 0 bytes, default 64 Jan 15 12:50:36.306434 kernel: kvm [1]: HYP mode not available Jan 15 12:50:36.306441 kernel: Initialise system trusted keyrings Jan 15 12:50:36.306449 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 12:50:36.306456 kernel: Key type asymmetric registered Jan 15 12:50:36.306465 kernel: Asymmetric key parser 'x509' registered Jan 15 12:50:36.306472 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 15 12:50:36.306479 kernel: io scheduler mq-deadline registered Jan 15 12:50:36.306487 kernel: io scheduler kyber registered Jan 15 12:50:36.306494 kernel: io scheduler bfq registered Jan 15 12:50:36.306501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 12:50:36.306508 kernel: thunder_xcv, ver 1.0 Jan 15 12:50:36.306515 kernel: thunder_bgx, ver 1.0 Jan 15 12:50:36.306523 kernel: nicpf, ver 1.0 Jan 15 12:50:36.306530 kernel: nicvf, ver 1.0 Jan 15 12:50:36.306691 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 15 12:50:36.306767 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-15T12:50:35 UTC (1736945435) Jan 15 12:50:36.306778 kernel: efifb: probing for efifb Jan 15 12:50:36.306786 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 15 12:50:36.306793 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 15 12:50:36.306800 kernel: efifb: scrolling: redraw Jan 15 12:50:36.306808 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 15 12:50:36.306818 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 12:50:36.306825 kernel: fb0: EFI VGA frame buffer device Jan 15 12:50:36.306833 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 15 12:50:36.306840 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 15 12:50:36.306847 kernel: No ACPI PMU IRQ for CPU0 Jan 15 12:50:36.306855 kernel: No ACPI PMU IRQ for CPU1 Jan 15 12:50:36.306862 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 15 12:50:36.306869 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 15 12:50:36.306877 kernel: watchdog: Hard watchdog permanently disabled Jan 15 12:50:36.306886 kernel: NET: Registered PF_INET6 protocol family Jan 15 12:50:36.306893 kernel: Segment Routing with IPv6 Jan 15 12:50:36.306901 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 12:50:36.306908 kernel: NET: Registered PF_PACKET protocol family Jan 15 12:50:36.306915 kernel: Key type dns_resolver registered Jan 15 12:50:36.306923 kernel: registered taskstats version 1 Jan 15 12:50:36.306930 kernel: Loading compiled-in X.509 certificates Jan 15 12:50:36.306937 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638' Jan 15 12:50:36.306945 kernel: Key type .fscrypt registered Jan 15 12:50:36.306953 kernel: Key type fscrypt-provisioning registered Jan 15 12:50:36.306961 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 15 12:50:36.306968 kernel: ima: Allocated hash algorithm: sha1 Jan 15 12:50:36.306975 kernel: ima: No architecture policies found Jan 15 12:50:36.306983 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 15 12:50:36.306990 kernel: clk: Disabling unused clocks Jan 15 12:50:36.306997 kernel: Freeing unused kernel memory: 39360K Jan 15 12:50:36.307005 kernel: Run /init as init process Jan 15 12:50:36.307012 kernel: with arguments: Jan 15 12:50:36.307021 kernel: /init Jan 15 12:50:36.307028 kernel: with environment: Jan 15 12:50:36.307035 kernel: HOME=/ Jan 15 12:50:36.307042 kernel: TERM=linux Jan 15 12:50:36.307049 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 15 12:50:36.307059 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 12:50:36.307068 systemd[1]: Detected virtualization microsoft. Jan 15 12:50:36.307076 systemd[1]: Detected architecture arm64. Jan 15 12:50:36.307086 systemd[1]: Running in initrd. Jan 15 12:50:36.307093 systemd[1]: No hostname configured, using default hostname. Jan 15 12:50:36.307101 systemd[1]: Hostname set to . Jan 15 12:50:36.307109 systemd[1]: Initializing machine ID from random generator. Jan 15 12:50:36.307117 systemd[1]: Queued start job for default target initrd.target. Jan 15 12:50:36.307125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:50:36.307133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:50:36.307141 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 12:50:36.307151 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 12:50:36.307159 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 12:50:36.307167 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 12:50:36.307177 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 15 12:50:36.307185 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 15 12:50:36.307193 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:50:36.307203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:50:36.307211 systemd[1]: Reached target paths.target - Path Units. Jan 15 12:50:36.307219 systemd[1]: Reached target slices.target - Slice Units. Jan 15 12:50:36.307227 systemd[1]: Reached target swap.target - Swaps. Jan 15 12:50:36.307235 systemd[1]: Reached target timers.target - Timer Units. Jan 15 12:50:36.307243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 12:50:36.307251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 12:50:36.307259 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 12:50:36.307267 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 15 12:50:36.307276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:50:36.307284 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 12:50:36.307292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:50:36.307300 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 12:50:36.307308 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 12:50:36.307316 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 12:50:36.307324 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 12:50:36.307331 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 12:50:36.307339 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 12:50:36.307349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 12:50:36.307376 systemd-journald[217]: Collecting audit messages is disabled. Jan 15 12:50:36.307396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:50:36.307404 systemd-journald[217]: Journal started Jan 15 12:50:36.307425 systemd-journald[217]: Runtime Journal (/run/log/journal/35789dc8cf94409bb65e3648a850b131) is 8.0M, max 78.5M, 70.5M free. Jan 15 12:50:36.319128 systemd-modules-load[218]: Inserted module 'overlay' Jan 15 12:50:36.351469 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 12:50:36.351493 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 12:50:36.351514 kernel: Bridge firewalling registered Jan 15 12:50:36.351727 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 15 12:50:36.362717 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 12:50:36.368845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:50:36.381097 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 12:50:36.392655 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 12:50:36.403659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:50:36.421888 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:50:36.429823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 12:50:36.453819 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 12:50:36.469551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 12:50:36.478989 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:50:36.500799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:50:36.507252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 12:50:36.530822 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 12:50:36.539866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 12:50:36.557357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:50:36.576501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 15 12:50:36.591086 dracut-cmdline[249]: dracut-dracut-053 Jan 15 12:50:36.596264 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 15 12:50:36.627871 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 12:50:36.659363 systemd-resolved[263]: Positive Trust Anchors: Jan 15 12:50:36.659379 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 12:50:36.659410 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 12:50:36.666055 systemd-resolved[263]: Defaulting to hostname 'linux'. Jan 15 12:50:36.667056 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 12:50:36.681383 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:50:36.744658 kernel: SCSI subsystem initialized Jan 15 12:50:36.751655 kernel: Loading iSCSI transport class v2.0-870. Jan 15 12:50:36.762681 kernel: iscsi: registered transport (tcp) Jan 15 12:50:36.779889 kernel: iscsi: registered transport (qla4xxx) Jan 15 12:50:36.779932 kernel: QLogic iSCSI HBA Driver Jan 15 12:50:36.821146 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 12:50:36.834781 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 12:50:36.857661 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 12:50:36.857697 kernel: device-mapper: uevent: version 1.0.3 Jan 15 12:50:36.867071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 15 12:50:36.917665 kernel: raid6: neonx8 gen() 15783 MB/s Jan 15 12:50:36.937651 kernel: raid6: neonx4 gen() 15656 MB/s Jan 15 12:50:36.957651 kernel: raid6: neonx2 gen() 13215 MB/s Jan 15 12:50:36.978651 kernel: raid6: neonx1 gen() 10480 MB/s Jan 15 12:50:36.998653 kernel: raid6: int64x8 gen() 6958 MB/s Jan 15 12:50:37.018649 kernel: raid6: int64x4 gen() 7346 MB/s Jan 15 12:50:37.039655 kernel: raid6: int64x2 gen() 6118 MB/s Jan 15 12:50:37.062997 kernel: raid6: int64x1 gen() 5059 MB/s Jan 15 12:50:37.063009 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s Jan 15 12:50:37.086640 kernel: raid6: .... 
xor() 11920 MB/s, rmw enabled Jan 15 12:50:37.086657 kernel: raid6: using neon recovery algorithm Jan 15 12:50:37.095655 kernel: xor: measuring software checksum speed Jan 15 12:50:37.102158 kernel: 8regs : 18223 MB/sec Jan 15 12:50:37.102170 kernel: 32regs : 19674 MB/sec Jan 15 12:50:37.105803 kernel: arm64_neon : 26954 MB/sec Jan 15 12:50:37.109781 kernel: xor: using function: arm64_neon (26954 MB/sec) Jan 15 12:50:37.160700 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 12:50:37.171874 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 12:50:37.188814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:50:37.211537 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jan 15 12:50:37.214767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:50:37.240417 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 12:50:37.257588 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation Jan 15 12:50:37.285869 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 12:50:37.299947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 12:50:37.334992 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:50:37.355292 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 12:50:37.379733 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 12:50:37.393575 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 12:50:37.402354 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:50:37.421808 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 12:50:37.442816 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 12:50:37.453967 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 12:50:37.480512 kernel: hv_vmbus: Vmbus version:5.3 Jan 15 12:50:37.486259 kernel: hv_vmbus: registering driver hid_hyperv Jan 15 12:50:37.486293 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 15 12:50:37.492098 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 15 12:50:37.508944 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 15 12:50:37.508968 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 15 12:50:37.508978 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 15 12:50:37.527328 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 15 12:50:37.534807 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 12:50:37.547889 kernel: hv_vmbus: registering driver hv_storvsc Jan 15 12:50:37.547914 kernel: scsi host1: storvsc_host_t Jan 15 12:50:37.540567 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 15 12:50:37.587072 kernel: hv_vmbus: registering driver hv_netvsc Jan 15 12:50:37.587118 kernel: scsi host0: storvsc_host_t Jan 15 12:50:37.587310 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 15 12:50:37.587331 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 15 12:50:37.565270 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:50:37.571621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:50:37.571795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:50:37.593702 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:50:37.631684 kernel: PTP clock support registered Jan 15 12:50:37.632912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:50:37.602017 kernel: hv_utils: Registering HyperV Utility Driver Jan 15 12:50:37.622651 kernel: hv_vmbus: registering driver hv_utils Jan 15 12:50:37.622705 kernel: hv_utils: Heartbeat IC version 3.0 Jan 15 12:50:37.622714 kernel: hv_utils: Shutdown IC version 3.2 Jan 15 12:50:37.622726 kernel: hv_utils: TimeSync IC version 4.0 Jan 15 12:50:37.622734 systemd-journald[217]: Time jumped backwards, rotating. Jan 15 12:50:37.622776 kernel: hv_netvsc 000d3af6-a292-000d-3af6-a292000d3af6 eth0: VF slot 1 added Jan 15 12:50:37.597052 systemd-resolved[263]: Clock change detected. Flushing caches. Jan 15 12:50:37.607372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:50:37.609398 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:50:37.657735 kernel: hv_vmbus: registering driver hv_pci Jan 15 12:50:37.657782 kernel: hv_pci c8876571-935c-4c64-b13c-50562eed65b1: PCI VMBus probing: Using version 0x10004 Jan 15 12:50:37.768122 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Jan 15 12:50:37.768285 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 12:50:37.768303 kernel: hv_pci c8876571-935c-4c64-b13c-50562eed65b1: PCI host bridge to bus 935c:00 Jan 15 12:50:37.768391 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Jan 15 12:50:37.768479 kernel: pci_bus 935c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 15 12:50:37.768573 kernel: pci_bus 935c:00: No busn resource found for root bus, will use [bus 00-ff] Jan 15 12:50:37.768650 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 15 12:50:37.768741 kernel: pci 935c:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 15 12:50:37.768835 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 15 12:50:37.768919 kernel: pci 935c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 15 12:50:37.769002 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 15 12:50:37.769083 kernel: pci 935c:00:02.0: enabling Extended Tags Jan 15 12:50:37.769165 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 15 12:50:37.769269 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 15 12:50:37.769354 kernel: pci 935c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 935c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 15 12:50:37.769437 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:50:37.769446 kernel: pci_bus 935c:00: busn_res: [bus 00-ff] end is updated to 00 Jan 15 12:50:37.769521 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 15 12:50:37.769604 kernel: pci 935c:00:02.0: BAR 0: 
assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 15 12:50:37.658178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:50:37.722649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:50:37.761388 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:50:37.806361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:50:37.835513 kernel: mlx5_core 935c:00:02.0: enabling device (0000 -> 0002) Jan 15 12:50:38.051377 kernel: mlx5_core 935c:00:02.0: firmware version: 16.30.1284 Jan 15 12:50:38.051514 kernel: hv_netvsc 000d3af6-a292-000d-3af6-a292000d3af6 eth0: VF registering: eth1 Jan 15 12:50:38.051608 kernel: mlx5_core 935c:00:02.0 eth1: joined to eth0 Jan 15 12:50:38.051700 kernel: mlx5_core 935c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 15 12:50:38.061213 kernel: mlx5_core 935c:00:02.0 enP37724s1: renamed from eth1 Jan 15 12:50:38.380897 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 15 12:50:38.407220 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (489) Jan 15 12:50:38.421934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 12:50:38.444218 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 15 12:50:38.521264 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (515) Jan 15 12:50:38.534778 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 15 12:50:38.541401 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 15 12:50:38.573449 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 12:50:38.599237 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:50:38.609215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:50:39.616255 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:50:39.616897 disk-uuid[610]: The operation has completed successfully. Jan 15 12:50:39.674610 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 12:50:39.674704 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 12:50:39.704340 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 15 12:50:39.716325 sh[697]: Success Jan 15 12:50:39.747229 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 15 12:50:39.962664 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 15 12:50:39.974122 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 15 12:50:39.984337 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 15 12:50:40.019631 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 15 12:50:40.019706 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:50:40.026724 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 15 12:50:40.031765 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 12:50:40.035953 kernel: BTRFS info (device dm-0): using free space tree Jan 15 12:50:40.357704 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 15 12:50:40.363001 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 12:50:40.382526 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 12:50:40.395377 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 12:50:40.415549 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:50:40.415574 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:50:40.415592 kernel: BTRFS info (device sda6): using free space tree Jan 15 12:50:40.438312 kernel: BTRFS info (device sda6): auto enabling async discard Jan 15 12:50:40.446788 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 15 12:50:40.460215 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:50:40.468707 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 12:50:40.485822 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 12:50:40.525822 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 12:50:40.545383 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 12:50:40.572514 systemd-networkd[881]: lo: Link UP Jan 15 12:50:40.572525 systemd-networkd[881]: lo: Gained carrier Jan 15 12:50:40.574127 systemd-networkd[881]: Enumeration completed Jan 15 12:50:40.574246 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 12:50:40.576786 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:50:40.576789 systemd-networkd[881]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 12:50:40.586169 systemd[1]: Reached target network.target - Network. Jan 15 12:50:40.670216 kernel: mlx5_core 935c:00:02.0 enP37724s1: Link up Jan 15 12:50:40.713206 kernel: hv_netvsc 000d3af6-a292-000d-3af6-a292000d3af6 eth0: Data path switched to VF: enP37724s1 Jan 15 12:50:40.713733 systemd-networkd[881]: enP37724s1: Link UP Jan 15 12:50:40.713825 systemd-networkd[881]: eth0: Link UP Jan 15 12:50:40.713923 systemd-networkd[881]: eth0: Gained carrier Jan 15 12:50:40.713932 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 15 12:50:40.728533 systemd-networkd[881]: enP37724s1: Gained carrier Jan 15 12:50:40.747235 systemd-networkd[881]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 12:50:41.575182 ignition[832]: Ignition 2.19.0 Jan 15 12:50:41.575763 ignition[832]: Stage: fetch-offline Jan 15 12:50:41.579816 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 12:50:41.575824 ignition[832]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:50:41.575832 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:50:41.575951 ignition[832]: parsed url from cmdline: "" Jan 15 12:50:41.600512 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 15 12:50:41.575954 ignition[832]: no config URL provided Jan 15 12:50:41.575959 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 12:50:41.575966 ignition[832]: no config at "/usr/lib/ignition/user.ign" Jan 15 12:50:41.575972 ignition[832]: failed to fetch config: resource requires networking Jan 15 12:50:41.576163 ignition[832]: Ignition finished successfully Jan 15 12:50:41.623901 ignition[890]: Ignition 2.19.0 Jan 15 12:50:41.623908 ignition[890]: Stage: fetch Jan 15 12:50:41.624083 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:50:41.624092 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:50:41.624206 ignition[890]: parsed url from cmdline: "" Jan 15 12:50:41.624210 ignition[890]: no config URL provided Jan 15 12:50:41.624214 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 12:50:41.624222 ignition[890]: no config at "/usr/lib/ignition/user.ign" Jan 15 12:50:41.624242 ignition[890]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 15 12:50:41.731947 ignition[890]: GET result: OK Jan 15 12:50:41.732048 ignition[890]: config has been read from IMDS userdata Jan 15 12:50:41.732089 ignition[890]: parsing config with SHA512: c12c29b015a2276a67c829af7b6375cf12d03856ea5895b91759ad9752523a6b6cf71d5819929620764ab4bd0b3fd069d2caa107f1498c2b3492f97e695e56c9 Jan 15 12:50:41.736078 unknown[890]: fetched base config from "system" Jan 15 12:50:41.736541 ignition[890]: fetch: fetch complete Jan 15 12:50:41.736085 unknown[890]: fetched base config from "system" Jan 15 12:50:41.736545 ignition[890]: fetch: fetch passed Jan 15 12:50:41.736090 unknown[890]: fetched user config from "azure" Jan 15 12:50:41.736587 ignition[890]: Ignition finished successfully Jan 15 12:50:41.745334 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 15 12:50:41.780417 ignition[897]: Ignition 2.19.0 Jan 15 12:50:41.759463 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 12:50:41.780432 ignition[897]: Stage: kargs Jan 15 12:50:41.787210 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 12:50:41.780613 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:50:41.780621 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:50:41.781802 ignition[897]: kargs: kargs passed Jan 15 12:50:41.781860 ignition[897]: Ignition finished successfully Jan 15 12:50:41.815405 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 12:50:41.835159 ignition[903]: Ignition 2.19.0 Jan 15 12:50:41.835171 ignition[903]: Stage: disks Jan 15 12:50:41.841232 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 15 12:50:41.835383 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:41.848914 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 12:50:41.835393 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:41.859422 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 12:50:41.836466 ignition[903]: disks: disks passed
Jan 15 12:50:41.870126 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 12:50:41.836516 ignition[903]: Ignition finished successfully
Jan 15 12:50:41.881116 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 12:50:41.886546 systemd-networkd[881]: enP37724s1: Gained IPv6LL
Jan 15 12:50:41.898232 systemd[1]: Reached target basic.target - Basic System.
Jan 15 12:50:41.921493 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 12:50:41.998597 systemd-fsck[911]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 15 12:50:42.008510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 12:50:42.026422 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 12:50:42.082550 systemd-networkd[881]: eth0: Gained IPv6LL
Jan 15 12:50:42.087614 kernel: EXT4-fs (sda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 15 12:50:42.083721 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 12:50:42.092450 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 12:50:42.142273 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:50:42.151562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 12:50:42.169776 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 15 12:50:42.190844 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (922)
Jan 15 12:50:42.190866 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:50:42.190532 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 12:50:42.219026 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:50:42.219048 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:50:42.190571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 12:50:42.220355 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 12:50:42.241212 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:50:42.242471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:50:42.259362 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 12:50:42.742327 coreos-metadata[924]: Jan 15 12:50:42.742 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 12:50:42.752072 coreos-metadata[924]: Jan 15 12:50:42.752 INFO Fetch successful
Jan 15 12:50:42.757610 coreos-metadata[924]: Jan 15 12:50:42.757 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 15 12:50:42.782569 coreos-metadata[924]: Jan 15 12:50:42.782 INFO Fetch successful
Jan 15 12:50:42.801242 coreos-metadata[924]: Jan 15 12:50:42.801 INFO wrote hostname ci-4081.3.0-a-b64d8040ed to /sysroot/etc/hostname
Jan 15 12:50:42.809479 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 12:50:43.055827 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 12:50:43.098232 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory
Jan 15 12:50:43.131730 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 12:50:43.141640 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 12:50:44.034090 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 12:50:44.051406 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 12:50:44.065433 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 12:50:44.083007 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:50:44.078519 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 12:50:44.108019 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 12:50:44.117338 ignition[1040]: INFO : Ignition 2.19.0
Jan 15 12:50:44.117338 ignition[1040]: INFO : Stage: mount
Jan 15 12:50:44.117338 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:44.117338 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:44.117338 ignition[1040]: INFO : mount: mount passed
Jan 15 12:50:44.117338 ignition[1040]: INFO : Ignition finished successfully
Jan 15 12:50:44.120554 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 12:50:44.133444 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 12:50:44.154478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:50:44.202985 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1051)
Jan 15 12:50:44.203035 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:50:44.209123 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:50:44.213305 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:50:44.220219 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:50:44.221797 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:50:44.253394 ignition[1068]: INFO : Ignition 2.19.0
Jan 15 12:50:44.253394 ignition[1068]: INFO : Stage: files
Jan 15 12:50:44.261843 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:44.261843 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:44.261843 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Jan 15 12:50:44.282871 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 15 12:50:44.282871 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 15 12:50:44.332744 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 15 12:50:44.340185 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 15 12:50:44.340185 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 15 12:50:44.333152 unknown[1068]: wrote ssh authorized keys file for user: core
Jan 15 12:50:44.360445 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 15 12:50:44.369861 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 15 12:50:44.369861 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:50:44.369861 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 15 12:50:45.806070 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 15 12:50:46.625141 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:50:46.636418 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 15 12:50:46.955328 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 15 12:50:47.169693 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:50:47.169693 ignition[1068]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 15 12:50:47.202878 ignition[1068]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:50:47.218514 ignition[1068]: INFO : files: files passed
Jan 15 12:50:47.218514 ignition[1068]: INFO : Ignition finished successfully
Jan 15 12:50:47.219485 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 15 12:50:47.260480 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 15 12:50:47.279381 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 15 12:50:47.382731 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:50:47.382731 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:50:47.298955 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 15 12:50:47.405632 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:50:47.299050 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 15 12:50:47.314060 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 12:50:47.339022 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 15 12:50:47.355503 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 15 12:50:47.417497 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 15 12:50:47.417651 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 15 12:50:47.426737 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 15 12:50:47.439290 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 15 12:50:47.450556 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 15 12:50:47.453414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 15 12:50:47.500122 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 12:50:47.522496 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 15 12:50:47.541618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 15 12:50:47.548236 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 12:50:47.560583 systemd[1]: Stopped target timers.target - Timer Units.
Jan 15 12:50:47.571460 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 15 12:50:47.571626 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 12:50:47.587491 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 15 12:50:47.599338 systemd[1]: Stopped target basic.target - Basic System.
Jan 15 12:50:47.609476 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 15 12:50:47.619916 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 12:50:47.631579 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 15 12:50:47.643365 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 15 12:50:47.654818 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 12:50:47.666505 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 15 12:50:47.678776 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 15 12:50:47.689440 systemd[1]: Stopped target swap.target - Swaps.
Jan 15 12:50:47.698758 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 15 12:50:47.698954 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 12:50:47.713821 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 15 12:50:47.725119 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 12:50:47.738531 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 15 12:50:47.744515 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 12:50:47.751609 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 15 12:50:47.751780 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 15 12:50:47.769600 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 15 12:50:47.769777 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 12:50:47.781469 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 15 12:50:47.781625 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 15 12:50:47.793164 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 15 12:50:47.793337 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 12:50:47.827342 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 15 12:50:47.851506 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 15 12:50:47.863867 ignition[1121]: INFO : Ignition 2.19.0
Jan 15 12:50:47.863867 ignition[1121]: INFO : Stage: umount
Jan 15 12:50:47.863867 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:50:47.863867 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:50:47.863867 ignition[1121]: INFO : umount: umount passed
Jan 15 12:50:47.863867 ignition[1121]: INFO : Ignition finished successfully
Jan 15 12:50:47.857422 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 15 12:50:47.857650 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 12:50:47.870660 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 15 12:50:47.870815 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 12:50:47.882816 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 15 12:50:47.884223 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 15 12:50:47.893045 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 15 12:50:47.894217 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 15 12:50:47.906350 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 15 12:50:47.906407 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 15 12:50:47.915912 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 15 12:50:47.915959 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 15 12:50:47.931313 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 15 12:50:47.931368 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 15 12:50:47.942482 systemd[1]: Stopped target network.target - Network.
Jan 15 12:50:47.953473 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 15 12:50:47.953559 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 12:50:47.965645 systemd[1]: Stopped target paths.target - Path Units.
Jan 15 12:50:47.975292 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 15 12:50:47.986259 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 12:50:47.996571 systemd[1]: Stopped target slices.target - Slice Units.
Jan 15 12:50:48.006788 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 15 12:50:48.017636 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 15 12:50:48.017687 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 12:50:48.028794 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 15 12:50:48.028836 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 12:50:48.038852 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 15 12:50:48.038906 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 15 12:50:48.049001 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 15 12:50:48.049049 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 15 12:50:48.066730 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 15 12:50:48.076630 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 15 12:50:48.086458 systemd-networkd[881]: eth0: DHCPv6 lease lost
Jan 15 12:50:48.280380 kernel: hv_netvsc 000d3af6-a292-000d-3af6-a292000d3af6 eth0: Data path switched from VF: enP37724s1
Jan 15 12:50:48.088725 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 15 12:50:48.089960 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 15 12:50:48.090318 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 15 12:50:48.099040 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 15 12:50:48.099085 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 12:50:48.128400 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 15 12:50:48.140620 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 15 12:50:48.140691 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 12:50:48.152221 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 12:50:48.171013 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 15 12:50:48.171122 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 15 12:50:48.196154 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 15 12:50:48.196279 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 15 12:50:48.207373 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 15 12:50:48.207446 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 15 12:50:48.219234 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 15 12:50:48.219293 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 12:50:48.231329 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 15 12:50:48.231479 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 12:50:48.245185 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 15 12:50:48.245470 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 15 12:50:48.257072 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 15 12:50:48.257114 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 12:50:48.276631 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 15 12:50:48.276690 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 12:50:48.290485 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 15 12:50:48.290554 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 15 12:50:48.306472 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 15 12:50:48.306534 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:50:48.342474 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 15 12:50:48.358121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 15 12:50:48.358222 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 12:50:48.375788 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 15 12:50:48.375846 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 12:50:48.386928 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 15 12:50:48.386974 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 12:50:48.398943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:50:48.398996 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:50:48.410988 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 15 12:50:48.411101 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 15 12:50:48.421631 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 15 12:50:48.421717 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 15 12:50:48.606182 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 15 12:50:48.606330 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 15 12:50:48.616768 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 15 12:50:48.626626 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 15 12:50:48.626708 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 15 12:50:48.649480 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 15 12:50:48.767948 systemd[1]: Switching root.
Jan 15 12:50:48.803034 systemd-journald[217]: Journal stopped
Jan 15 12:50:53.197738 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 15 12:50:53.197762 kernel: SELinux: policy capability network_peer_controls=1
Jan 15 12:50:53.197773 kernel: SELinux: policy capability open_perms=1
Jan 15 12:50:53.197783 kernel: SELinux: policy capability extended_socket_class=1
Jan 15 12:50:53.197791 kernel: SELinux: policy capability always_check_network=0
Jan 15 12:50:53.197798 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 15 12:50:53.197807 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 15 12:50:53.197815 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 15 12:50:53.197823 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 15 12:50:53.197831 kernel: audit: type=1403 audit(1736945450.499:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 15 12:50:53.197841 systemd[1]: Successfully loaded SELinux policy in 154.493ms.
Jan 15 12:50:53.197851 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.089ms.
Jan 15 12:50:53.197861 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 15 12:50:53.197870 systemd[1]: Detected virtualization microsoft.
Jan 15 12:50:53.197879 systemd[1]: Detected architecture arm64.
Jan 15 12:50:53.197890 systemd[1]: Detected first boot.
Jan 15 12:50:53.197899 systemd[1]: Hostname set to <ci-4081.3.0-a-b64d8040ed>.
Jan 15 12:50:53.197908 systemd[1]: Initializing machine ID from random generator.
Jan 15 12:50:53.197917 zram_generator::config[1181]: No configuration found.
Jan 15 12:50:53.197927 systemd[1]: Populated /etc with preset unit settings.
Jan 15 12:50:53.197936 systemd[1]: Queued start job for default target multi-user.target.
Jan 15 12:50:53.197946 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 15 12:50:53.197956 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 15 12:50:53.197965 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 15 12:50:53.197974 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 15 12:50:53.197984 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 15 12:50:53.197993 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 15 12:50:53.198004 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 15 12:50:53.198015 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 15 12:50:53.198024 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 15 12:50:53.198033 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 12:50:53.198043 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 12:50:53.198052 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 15 12:50:53.198061 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 15 12:50:53.198070 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 15 12:50:53.198080 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 12:50:53.198089 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 15 12:50:53.198099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 12:50:53.198109 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 15 12:50:53.198118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 12:50:53.198129 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 12:50:53.198139 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 12:50:53.198148 systemd[1]: Reached target swap.target - Swaps.
Jan 15 12:50:53.198157 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 15 12:50:53.198168 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 15 12:50:53.198178 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 12:50:53.198187 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 15 12:50:53.200258 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 12:50:53.200289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 12:50:53.200299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 12:50:53.200309 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 15 12:50:53.200327 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 15 12:50:53.200337 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 15 12:50:53.200347 systemd[1]: Mounting media.mount - External Media Directory...
Jan 15 12:50:53.200356 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 15 12:50:53.200366 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 15 12:50:53.200376 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 15 12:50:53.200387 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 15 12:50:53.200397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 12:50:53.200407 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 12:50:53.200417 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 15 12:50:53.200427 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 12:50:53.200436 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 15 12:50:53.200446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 12:50:53.200455 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 15 12:50:53.200465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 12:50:53.200477 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 15 12:50:53.200487 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 15 12:50:53.200497 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 15 12:50:53.200508 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 12:50:53.200518 kernel: fuse: init (API version 7.39)
Jan 15 12:50:53.200527 kernel: loop: module loaded
Jan 15 12:50:53.200536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 12:50:53.200546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 15 12:50:53.200557 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 15 12:50:53.200605 systemd-journald[1292]: Collecting audit messages is disabled.
Jan 15 12:50:53.200626 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 12:50:53.200636 kernel: ACPI: bus type drm_connector registered
Jan 15 12:50:53.200648 systemd-journald[1292]: Journal started
Jan 15 12:50:53.200669 systemd-journald[1292]: Runtime Journal (/run/log/journal/6f40079888e042a3b48e1b70b4d0ff32) is 8.0M, max 78.5M, 70.5M free.
Jan 15 12:50:53.222335 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 12:50:53.227840 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 15 12:50:53.233572 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 15 12:50:53.239851 systemd[1]: Mounted media.mount - External Media Directory.
Jan 15 12:50:53.244778 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 15 12:50:53.250611 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 15 12:50:53.258398 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 15 12:50:53.263767 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 15 12:50:53.271442 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 12:50:53.277947 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 15 12:50:53.278113 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 15 12:50:53.284711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 12:50:53.284868 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 12:50:53.292000 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 15 12:50:53.292174 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 15 12:50:53.298339 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 12:50:53.298497 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 12:50:53.305113 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 15 12:50:53.305463 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 15 12:50:53.311126 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 12:50:53.311375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 12:50:53.317906 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 12:50:53.323935 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 15 12:50:53.330759 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 15 12:50:53.337938 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 12:50:53.356751 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 15 12:50:53.370289 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 15 12:50:53.377369 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 15 12:50:53.383885 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 15 12:50:53.403342 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 15 12:50:53.410631 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 15 12:50:53.417003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 15 12:50:53.418501 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 15 12:50:53.424622 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 15 12:50:53.426071 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 12:50:53.434788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 12:50:53.445366 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 15 12:50:53.457365 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 15 12:50:53.467747 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 15 12:50:53.475353 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 15 12:50:53.488455 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 15 12:50:53.495472 udevadm[1342]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 15 12:50:53.509090 systemd-journald[1292]: Time spent on flushing to /var/log/journal/6f40079888e042a3b48e1b70b4d0ff32 is 43.638ms for 892 entries.
Jan 15 12:50:53.509090 systemd-journald[1292]: System Journal (/var/log/journal/6f40079888e042a3b48e1b70b4d0ff32) is 11.8M, max 2.6G, 2.6G free.
Jan 15 12:50:53.615421 systemd-journald[1292]: Received client request to flush runtime journal.
Jan 15 12:50:53.615480 systemd-journald[1292]: /var/log/journal/6f40079888e042a3b48e1b70b4d0ff32/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jan 15 12:50:53.615503 systemd-journald[1292]: Rotating system journal.
Jan 15 12:50:53.566002 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jan 15 12:50:53.566015 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jan 15 12:50:53.573692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 12:50:53.581552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 12:50:53.596370 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 15 12:50:53.619600 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 15 12:50:53.677776 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 15 12:50:53.691422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 12:50:53.710332 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
Jan 15 12:50:53.710351 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
Jan 15 12:50:53.714416 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 12:50:54.616754 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 15 12:50:54.628355 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 12:50:54.656169 systemd-udevd[1368]: Using default interface naming scheme 'v255'.
Jan 15 12:50:54.810049 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 12:50:54.830725 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 12:50:54.896045 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jan 15 12:50:54.924388 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 15 12:50:54.974218 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 15 12:50:54.991215 kernel: mousedev: PS/2 mouse device common for all mice
Jan 15 12:50:55.017384 kernel: hv_vmbus: registering driver hyperv_fb
Jan 15 12:50:55.017477 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 15 12:50:55.024156 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 15 12:50:55.029273 kernel: Console: switching to colour dummy device 80x25
Jan 15 12:50:55.038484 kernel: hv_vmbus: registering driver hv_balloon
Jan 15 12:50:55.038566 kernel: Console: switching to colour frame buffer device 128x48
Jan 15 12:50:55.045771 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 15 12:50:55.045857 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 15 12:50:55.066528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:50:55.086758 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:50:55.087012 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:50:55.102393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:50:55.152240 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1369)
Jan 15 12:50:55.207896 systemd-networkd[1381]: lo: Link UP
Jan 15 12:50:55.208368 systemd-networkd[1381]: lo: Gained carrier
Jan 15 12:50:55.213371 systemd-networkd[1381]: Enumeration completed
Jan 15 12:50:55.213745 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 12:50:55.214018 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:50:55.214023 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 12:50:55.225298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 15 12:50:55.233029 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 15 12:50:55.245455 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 15 12:50:55.253164 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 15 12:50:55.265324 kernel: mlx5_core 935c:00:02.0 enP37724s1: Link up
Jan 15 12:50:55.291242 kernel: hv_netvsc 000d3af6-a292-000d-3af6-a292000d3af6 eth0: Data path switched to VF: enP37724s1
Jan 15 12:50:55.292152 systemd-networkd[1381]: enP37724s1: Link UP
Jan 15 12:50:55.292288 systemd-networkd[1381]: eth0: Link UP
Jan 15 12:50:55.292291 systemd-networkd[1381]: eth0: Gained carrier
Jan 15 12:50:55.292307 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:50:55.297521 systemd-networkd[1381]: enP37724s1: Gained carrier
Jan 15 12:50:55.304237 systemd-networkd[1381]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 15 12:50:55.411239 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 15 12:50:55.441818 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 15 12:50:55.449082 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 12:50:55.465464 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 15 12:50:55.472692 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 15 12:50:55.473018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:50:55.500312 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 15 12:50:55.508421 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 12:50:55.515505 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 15 12:50:55.515666 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 12:50:55.521460 systemd[1]: Reached target machines.target - Containers.
Jan 15 12:50:55.528270 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 15 12:50:55.541341 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 15 12:50:55.549156 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 15 12:50:55.555419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 12:50:55.556753 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 15 12:50:55.568461 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 15 12:50:55.577540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 15 12:50:55.585070 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 15 12:50:55.644424 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 15 12:50:55.657235 kernel: loop0: detected capacity change from 0 to 31320
Jan 15 12:50:55.676690 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 15 12:50:55.678338 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 15 12:50:56.048216 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 15 12:50:56.126221 kernel: loop1: detected capacity change from 0 to 114328
Jan 15 12:50:56.349360 systemd-networkd[1381]: eth0: Gained IPv6LL
Jan 15 12:50:56.351638 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 15 12:50:56.458220 kernel: loop2: detected capacity change from 0 to 114432
Jan 15 12:50:56.776218 kernel: loop3: detected capacity change from 0 to 194512
Jan 15 12:50:56.808225 kernel: loop4: detected capacity change from 0 to 31320
Jan 15 12:50:56.818217 kernel: loop5: detected capacity change from 0 to 114328
Jan 15 12:50:56.826242 kernel: loop6: detected capacity change from 0 to 114432
Jan 15 12:50:56.834213 kernel: loop7: detected capacity change from 0 to 194512
Jan 15 12:50:56.837823 (sd-merge)[1488]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 15 12:50:56.838306 (sd-merge)[1488]: Merged extensions into '/usr'.
Jan 15 12:50:56.841727 systemd[1]: Reloading requested from client PID 1473 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 15 12:50:56.841999 systemd[1]: Reloading...
Jan 15 12:50:56.861537 systemd-networkd[1381]: enP37724s1: Gained IPv6LL
Jan 15 12:50:56.896272 zram_generator::config[1515]: No configuration found.
Jan 15 12:50:57.039022 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 12:50:57.108466 systemd[1]: Reloading finished in 265 ms.
Jan 15 12:50:57.121966 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 15 12:50:57.137324 systemd[1]: Starting ensure-sysext.service...
Jan 15 12:50:57.144447 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 12:50:57.159725 systemd[1]: Reloading requested from client PID 1576 ('systemctl') (unit ensure-sysext.service)...
Jan 15 12:50:57.159745 systemd[1]: Reloading...
Jan 15 12:50:57.179697 systemd-tmpfiles[1577]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 15 12:50:57.179973 systemd-tmpfiles[1577]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 15 12:50:57.183003 systemd-tmpfiles[1577]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 15 12:50:57.183598 systemd-tmpfiles[1577]: ACLs are not supported, ignoring.
Jan 15 12:50:57.183721 systemd-tmpfiles[1577]: ACLs are not supported, ignoring.
Jan 15 12:50:57.189383 systemd-tmpfiles[1577]: Detected autofs mount point /boot during canonicalization of boot.
Jan 15 12:50:57.189508 systemd-tmpfiles[1577]: Skipping /boot
Jan 15 12:50:57.196708 systemd-tmpfiles[1577]: Detected autofs mount point /boot during canonicalization of boot.
Jan 15 12:50:57.196719 systemd-tmpfiles[1577]: Skipping /boot
Jan 15 12:50:57.233224 zram_generator::config[1606]: No configuration found.
Jan 15 12:50:57.353082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 12:50:57.422741 systemd[1]: Reloading finished in 262 ms.
Jan 15 12:50:57.436128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 12:50:57.450449 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 15 12:50:57.459346 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 15 12:50:57.468969 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 15 12:50:57.487369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 12:50:57.500379 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 15 12:50:57.515325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 12:50:57.522434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 12:50:57.531511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 12:50:57.558500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 12:50:57.572318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 12:50:57.573168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 12:50:57.573361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 12:50:57.580983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 12:50:57.581138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 12:50:57.590675 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 12:50:57.590898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 12:50:57.604334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 12:50:57.612622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 12:50:57.615789 systemd-resolved[1680]: Positive Trust Anchors:
Jan 15 12:50:57.616130 systemd-resolved[1680]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 12:50:57.616240 systemd-resolved[1680]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 12:50:57.621494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 12:50:57.632482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 12:50:57.639831 systemd-resolved[1680]: Using system hostname 'ci-4081.3.0-a-b64d8040ed'.
Jan 15 12:50:57.644523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 12:50:57.645348 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 12:50:57.654129 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 15 12:50:57.663088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 12:50:57.663506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 12:50:57.672369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 12:50:57.672531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 12:50:57.680859 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 12:50:57.681063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 12:50:57.692998 augenrules[1708]: No rules
Jan 15 12:50:57.695415 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 15 12:50:57.703746 systemd[1]: Reached target network.target - Network.
Jan 15 12:50:57.709308 systemd[1]: Reached target network-online.target - Network is Online.
Jan 15 12:50:57.715485 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 12:50:57.722577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 12:50:57.727356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 12:50:57.736429 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 15 12:50:57.746458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 12:50:57.756135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 12:50:57.762810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 12:50:57.762887 systemd[1]: Reached target time-set.target - System Time Set.
Jan 15 12:50:57.769661 systemd[1]: Finished ensure-sysext.service.
Jan 15 12:50:57.779514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 12:50:57.779672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 12:50:57.787493 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 15 12:50:57.787652 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 15 12:50:57.795100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 12:50:57.795386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 12:50:57.803497 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 12:50:57.803702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 12:50:57.814406 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 15 12:50:57.814508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 15 12:50:57.860912 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 15 12:50:58.324952 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 15 12:50:58.332347 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 15 12:51:01.215836 ldconfig[1469]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 15 12:51:01.225671 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 15 12:51:01.239409 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 15 12:51:01.254188 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 15 12:51:01.262311 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 12:51:01.269519 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 15 12:51:01.277713 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 15 12:51:01.286348 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 15 12:51:01.293343 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 15 12:51:01.301109 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 15 12:51:01.309457 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 15 12:51:01.309493 systemd[1]: Reached target paths.target - Path Units.
Jan 15 12:51:01.315120 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 12:51:01.337714 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 12:51:01.346716 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 12:51:01.371937 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 12:51:01.378706 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 12:51:01.386021 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 12:51:01.391744 systemd[1]: Reached target basic.target - Basic System. Jan 15 12:51:01.397360 systemd[1]: System is tainted: cgroupsv1 Jan 15 12:51:01.397407 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 12:51:01.397429 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 12:51:01.410289 systemd[1]: Starting chronyd.service - NTP client/server... Jan 15 12:51:01.419348 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 12:51:01.432474 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 15 12:51:01.445748 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 12:51:01.454103 (chronyd)[1746]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 15 12:51:01.456350 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 12:51:01.465415 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 12:51:01.466538 jq[1751]: false Jan 15 12:51:01.473467 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 12:51:01.473540 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 15 12:51:01.478804 chronyd[1758]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 15 12:51:01.481047 chronyd[1758]: Timezone right/UTC failed leap second check, ignoring Jan 15 12:51:01.483407 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 15 12:51:01.481250 chronyd[1758]: Loaded seccomp filter (level 2) Jan 15 12:51:01.486803 KVP[1756]: KVP starting; pid is:1756 Jan 15 12:51:01.494627 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 15 12:51:01.505311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:51:01.515600 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 12:51:01.525416 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
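The "System is tainted: cgroupsv1" record means this boot runs systemd on the legacy cgroup hierarchy; the "cgroup compatibility translation" message a little further down says the same. One common way to check which hierarchy is mounted, sketched below: the unified v2 hierarchy exposes cgroup.controllers at the mount root, legacy v1 does not (hybrid setups need a closer look than this).

    # Detect the cgroup hierarchy behind the "tainted: cgroupsv1" record:
    # cgroup v2 exposes cgroup.controllers at the mount root, v1 does not.
    from pathlib import Path

    def cgroup_version() -> int:
        return 2 if Path("/sys/fs/cgroup/cgroup.controllers").is_file() else 1

    print(f"cgroup v{cgroup_version()} hierarchy mounted")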
Jan 15 12:51:01.537838 extend-filesystems[1754]: Found loop4 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found loop5 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found loop6 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found loop7 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found sda Jan 15 12:51:01.543985 extend-filesystems[1754]: Found sda1 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found sda2 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found sda3 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found usr Jan 15 12:51:01.543985 extend-filesystems[1754]: Found sda4 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found sda6 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found sda7 Jan 15 12:51:01.543985 extend-filesystems[1754]: Found sda9 Jan 15 12:51:01.543985 extend-filesystems[1754]: Checking size of /dev/sda9 Jan 15 12:51:01.662397 kernel: hv_utils: KVP IC version 4.0 Jan 15 12:51:01.566986 KVP[1756]: KVP LIC Version: 3.1 Jan 15 12:51:01.557493 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 12:51:01.592344 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 12:51:01.614595 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 12:51:01.639501 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 12:51:01.655231 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 12:51:01.668354 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 12:51:01.678322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 12:51:01.688216 extend-filesystems[1754]: Old size kept for /dev/sda9 Jan 15 12:51:01.688216 extend-filesystems[1754]: Found sr0 Jan 15 12:51:01.687895 systemd[1]: Started chronyd.service - NTP client/server. Jan 15 12:51:01.733648 dbus-daemon[1750]: [system] SELinux support is enabled Jan 15 12:51:01.706927 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 12:51:01.736311 jq[1790]: true Jan 15 12:51:01.707152 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 12:51:01.707423 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 12:51:01.707614 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 12:51:01.736072 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 12:51:01.754662 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 12:51:01.754897 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 12:51:01.764757 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 12:51:01.773154 update_engine[1787]: I20250115 12:51:01.772717 1787 main.cc:92] Flatcar Update Engine starting Jan 15 12:51:01.781353 update_engine[1787]: I20250115 12:51:01.780795 1787 update_check_scheduler.cc:74] Next update check in 10m38s Jan 15 12:51:01.781753 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 12:51:01.782006 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 12:51:01.813156 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
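extend-filesystems walks the block devices listed above and grows the root filesystem when its partition has spare room; "Old size kept for /dev/sda9" means nothing needed to change. On Flatcar, sda9 is the ROOT partition. A sketch reading its size from sysfs, which always counts in 512-byte sectors:

    # Read the size of /dev/sda9 (the root partition extend-filesystems
    # checked above) from sysfs; the value is in 512-byte sectors.
    from pathlib import Path

    sectors = int(Path("/sys/block/sda/sda9/size").read_text())
    print(f"/dev/sda9: {sectors} sectors = {sectors * 512 / 2**30:.2f} GiB")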
Jan 15 12:51:01.813204 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 12:51:01.820667 systemd-logind[1784]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 15 12:51:01.868379 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1807) Jan 15 12:51:01.829998 systemd-logind[1784]: New seat seat0. Jan 15 12:51:01.832153 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 12:51:01.832464 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 12:51:01.883265 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 12:51:01.889927 coreos-metadata[1749]: Jan 15 12:51:01.889 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 15 12:51:01.913604 coreos-metadata[1749]: Jan 15 12:51:01.894 INFO Fetch successful Jan 15 12:51:01.913604 coreos-metadata[1749]: Jan 15 12:51:01.895 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 15 12:51:01.916567 systemd[1]: Started update-engine.service - Update Engine. Jan 15 12:51:01.923860 tar[1805]: linux-arm64/helm Jan 15 12:51:01.941564 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 12:51:01.941593 (ntainerd)[1810]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 15 12:51:01.950844 coreos-metadata[1749]: Jan 15 12:51:01.950 INFO Fetch successful Jan 15 12:51:01.951492 coreos-metadata[1749]: Jan 15 12:51:01.951 INFO Fetching http://168.63.129.16/machine/35f4c0a7-76a6-4a62-8207-2032052b015c/3eabd780%2D43e5%2D4541%2Da697%2D2053e4fd89e3.%5Fci%2D4081.3.0%2Da%2Db64d8040ed?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 15 12:51:01.953977 coreos-metadata[1749]: Jan 15 12:51:01.953 INFO Fetch successful Jan 15 12:51:01.954216 jq[1809]: true Jan 15 12:51:01.965516 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 12:51:01.978608 coreos-metadata[1749]: Jan 15 12:51:01.954 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 15 12:51:01.978608 coreos-metadata[1749]: Jan 15 12:51:01.972 INFO Fetch successful Jan 15 12:51:02.061307 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 12:51:02.069902 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 12:51:02.142700 bash[1878]: Updated "/home/core/.ssh/authorized_keys" Jan 15 12:51:02.144388 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 12:51:02.165866 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 15 12:51:02.284383 locksmithd[1854]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 12:51:02.548511 sshd_keygen[1792]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 12:51:02.581821 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 12:51:02.599564 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 12:51:02.617112 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
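coreos-metadata's last fetch above goes to the Azure Instance Metadata Service at 169.254.169.254 rather than the wireserver. IMDS only answers requests carrying the "Metadata: true" header, made from inside the VM. The equivalent query, sketched with the stdlib (the printed value naturally varies with the VM size):

    # Repeat the IMDS query coreos-metadata logs above; IMDS requires
    # the "Metadata: true" header and is reachable only from the VM.
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # VM size string; value varies per host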
Jan 15 12:51:02.628675 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 12:51:02.628923 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 12:51:02.643270 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 12:51:02.682397 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 15 12:51:02.698991 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 12:51:02.721222 tar[1805]: linux-arm64/LICENSE Jan 15 12:51:02.721222 tar[1805]: linux-arm64/README.md Jan 15 12:51:02.722823 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 12:51:02.738842 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 15 12:51:02.745928 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 12:51:02.766948 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 12:51:02.784969 containerd[1810]: time="2025-01-15T12:51:02.784878100Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 15 12:51:02.811337 containerd[1810]: time="2025-01-15T12:51:02.810992860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812351180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812394300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812415380Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812575940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812592220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812652220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812663220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812869580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812886180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812899460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813496 containerd[1810]: time="2025-01-15T12:51:02.812908420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813739 containerd[1810]: time="2025-01-15T12:51:02.812976460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:51:02.813739 containerd[1810]: time="2025-01-15T12:51:02.813167980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:51:02.814636 containerd[1810]: time="2025-01-15T12:51:02.814609180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:51:02.815358 containerd[1810]: time="2025-01-15T12:51:02.815336900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 15 12:51:02.815574 containerd[1810]: time="2025-01-15T12:51:02.815527860Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 15 12:51:02.815702 containerd[1810]: time="2025-01-15T12:51:02.815682380Z" level=info msg="metadata content store policy set" policy=shared Jan 15 12:51:02.833237 containerd[1810]: time="2025-01-15T12:51:02.833167420Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 15 12:51:02.833443 containerd[1810]: time="2025-01-15T12:51:02.833424380Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 15 12:51:02.833529 containerd[1810]: time="2025-01-15T12:51:02.833514860Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 15 12:51:02.833585 containerd[1810]: time="2025-01-15T12:51:02.833573580Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 15 12:51:02.833671 containerd[1810]: time="2025-01-15T12:51:02.833656260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 15 12:51:02.833906 containerd[1810]: time="2025-01-15T12:51:02.833887580Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 15 12:51:02.834888 containerd[1810]: time="2025-01-15T12:51:02.834829820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 15 12:51:02.835044 containerd[1810]: time="2025-01-15T12:51:02.835017660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 15 12:51:02.835087 containerd[1810]: time="2025-01-15T12:51:02.835042860Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 15 12:51:02.835087 containerd[1810]: time="2025-01-15T12:51:02.835067700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 15 12:51:02.835123 containerd[1810]: time="2025-01-15T12:51:02.835088220Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 15 12:51:02.835123 containerd[1810]: time="2025-01-15T12:51:02.835105860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 15 12:51:02.835156 containerd[1810]: time="2025-01-15T12:51:02.835126660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 15 12:51:02.835156 containerd[1810]: time="2025-01-15T12:51:02.835145100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 15 12:51:02.835267 containerd[1810]: time="2025-01-15T12:51:02.835166540Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 15 12:51:02.835267 containerd[1810]: time="2025-01-15T12:51:02.835180540Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 15 12:51:02.835267 containerd[1810]: time="2025-01-15T12:51:02.835215980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 15 12:51:02.835267 containerd[1810]: time="2025-01-15T12:51:02.835246100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 15 12:51:02.835370 containerd[1810]: time="2025-01-15T12:51:02.835271540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835370 containerd[1810]: time="2025-01-15T12:51:02.835290860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835370 containerd[1810]: time="2025-01-15T12:51:02.835307940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835370 containerd[1810]: time="2025-01-15T12:51:02.835325500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835370 containerd[1810]: time="2025-01-15T12:51:02.835341220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835370 containerd[1810]: time="2025-01-15T12:51:02.835355900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835487 containerd[1810]: time="2025-01-15T12:51:02.835372300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835487 containerd[1810]: time="2025-01-15T12:51:02.835390860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835487 containerd[1810]: time="2025-01-15T12:51:02.835407620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835487 containerd[1810]: time="2025-01-15T12:51:02.835430820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835487 containerd[1810]: time="2025-01-15T12:51:02.835446300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835487 containerd[1810]: time="2025-01-15T12:51:02.835461780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 15 12:51:02.835487 containerd[1810]: time="2025-01-15T12:51:02.835475140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835611 containerd[1810]: time="2025-01-15T12:51:02.835494780Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 15 12:51:02.835611 containerd[1810]: time="2025-01-15T12:51:02.835522540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835611 containerd[1810]: time="2025-01-15T12:51:02.835540180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835611 containerd[1810]: time="2025-01-15T12:51:02.835555100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 15 12:51:02.835611 containerd[1810]: time="2025-01-15T12:51:02.835609180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 15 12:51:02.835694 containerd[1810]: time="2025-01-15T12:51:02.835630660Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 15 12:51:02.835694 containerd[1810]: time="2025-01-15T12:51:02.835664940Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 15 12:51:02.835694 containerd[1810]: time="2025-01-15T12:51:02.835682100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 15 12:51:02.835751 containerd[1810]: time="2025-01-15T12:51:02.835692500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 15 12:51:02.835751 containerd[1810]: time="2025-01-15T12:51:02.835709700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 15 12:51:02.835751 containerd[1810]: time="2025-01-15T12:51:02.835723540Z" level=info msg="NRI interface is disabled by configuration." Jan 15 12:51:02.835751 containerd[1810]: time="2025-01-15T12:51:02.835734300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 15 12:51:02.836882 containerd[1810]: time="2025-01-15T12:51:02.836055420Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 15 12:51:02.836882 containerd[1810]: time="2025-01-15T12:51:02.836126140Z" level=info msg="Connect containerd service" Jan 15 12:51:02.836882 containerd[1810]: time="2025-01-15T12:51:02.836167780Z" level=info msg="using legacy CRI server" Jan 15 12:51:02.836882 containerd[1810]: time="2025-01-15T12:51:02.836174620Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 12:51:02.836882 containerd[1810]: time="2025-01-15T12:51:02.836307540Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 15 12:51:02.837605 containerd[1810]: time="2025-01-15T12:51:02.837575100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 
12:51:02.837967 containerd[1810]: time="2025-01-15T12:51:02.837945420Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 12:51:02.838105 containerd[1810]: time="2025-01-15T12:51:02.838014740Z" level=info msg="Start subscribing containerd event" Jan 15 12:51:02.838141 containerd[1810]: time="2025-01-15T12:51:02.838110980Z" level=info msg="Start recovering state" Jan 15 12:51:02.838222 containerd[1810]: time="2025-01-15T12:51:02.838187100Z" level=info msg="Start event monitor" Jan 15 12:51:02.838263 containerd[1810]: time="2025-01-15T12:51:02.838222460Z" level=info msg="Start snapshots syncer" Jan 15 12:51:02.838263 containerd[1810]: time="2025-01-15T12:51:02.838233740Z" level=info msg="Start cni network conf syncer for default" Jan 15 12:51:02.838345 containerd[1810]: time="2025-01-15T12:51:02.838243260Z" level=info msg="Start streaming server" Jan 15 12:51:02.838415 containerd[1810]: time="2025-01-15T12:51:02.838081860Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 12:51:02.838525 containerd[1810]: time="2025-01-15T12:51:02.838510700Z" level=info msg="containerd successfully booted in 0.054534s" Jan 15 12:51:02.838627 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 12:51:02.919435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:02.927661 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:51:02.928379 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 12:51:02.936030 systemd[1]: Startup finished in 15.339s (kernel) + 12.590s (userspace) = 27.929s. Jan 15 12:51:03.206001 login[1919]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:51:03.206115 login[1921]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:51:03.217810 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 12:51:03.224602 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 12:51:03.229311 systemd-logind[1784]: New session 2 of user core. Jan 15 12:51:03.236632 systemd-logind[1784]: New session 1 of user core. Jan 15 12:51:03.243819 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 12:51:03.253436 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 12:51:03.256912 (systemd)[1952]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 12:51:03.442363 systemd[1952]: Queued start job for default target default.target. Jan 15 12:51:03.442730 systemd[1952]: Created slice app.slice - User Application Slice. Jan 15 12:51:03.442748 systemd[1952]: Reached target paths.target - Paths. Jan 15 12:51:03.442758 systemd[1952]: Reached target timers.target - Timers. Jan 15 12:51:03.445411 kubelet[1939]: E0115 12:51:03.445332 1939 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:51:03.448571 systemd[1952]: Starting dbus.socket - D-Bus User Message Bus Socket... 
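The CRI config dump a few records earlier shows SystemdCgroup:false for the runc runtime, consistent with this machine's legacy-cgroup boot. A sketch that reads the same switch with Python 3.11's stdlib tomllib; /etc/containerd/config.toml is containerd's conventional config path, assumed here since the log prints the merged configuration rather than naming a file:

    # Inspect the runc SystemdCgroup switch seen in the CRI dump above.
    # Path and section layout are assumptions based on containerd defaults.
    import tomllib

    with open("/etc/containerd/config.toml", "rb") as f:
        cfg = tomllib.load(f)
    try:
        opts = (cfg["plugins"]["io.containerd.grpc.v1.cri"]
                   ["containerd"]["runtimes"]["runc"]["options"])
        print("SystemdCgroup =", opts.get("SystemdCgroup", False))
    except KeyError:
        print("runc options not set in the file; containerd defaults apply")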
Jan 15 12:51:03.454324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:51:03.454478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:51:03.459446 systemd[1952]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 12:51:03.459504 systemd[1952]: Reached target sockets.target - Sockets. Jan 15 12:51:03.459516 systemd[1952]: Reached target basic.target - Basic System. Jan 15 12:51:03.459629 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 12:51:03.460278 systemd[1952]: Reached target default.target - Main User Target. Jan 15 12:51:03.460318 systemd[1952]: Startup finished in 195ms. Jan 15 12:51:03.467585 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 12:51:03.468338 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 12:51:04.541213 waagent[1916]: 2025-01-15T12:51:04.537573Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 15 12:51:04.543366 waagent[1916]: 2025-01-15T12:51:04.543298Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 15 12:51:04.548299 waagent[1916]: 2025-01-15T12:51:04.548240Z INFO Daemon Daemon Python: 3.11.9 Jan 15 12:51:04.554333 waagent[1916]: 2025-01-15T12:51:04.554262Z INFO Daemon Daemon Run daemon Jan 15 12:51:04.558594 waagent[1916]: 2025-01-15T12:51:04.558546Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 15 12:51:04.567740 waagent[1916]: 2025-01-15T12:51:04.567667Z INFO Daemon Daemon Using waagent for provisioning Jan 15 12:51:04.573291 waagent[1916]: 2025-01-15T12:51:04.573235Z INFO Daemon Daemon Activate resource disk Jan 15 12:51:04.577880 waagent[1916]: 2025-01-15T12:51:04.577829Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 15 12:51:04.589299 waagent[1916]: 2025-01-15T12:51:04.589235Z INFO Daemon Daemon Found device: None Jan 15 12:51:04.593664 waagent[1916]: 2025-01-15T12:51:04.593613Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 15 12:51:04.602345 waagent[1916]: 2025-01-15T12:51:04.602295Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 15 12:51:04.615114 waagent[1916]: 2025-01-15T12:51:04.615054Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 12:51:04.621427 waagent[1916]: 2025-01-15T12:51:04.621376Z INFO Daemon Daemon Running default provisioning handler Jan 15 12:51:04.633881 waagent[1916]: 2025-01-15T12:51:04.633803Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 15 12:51:04.648636 waagent[1916]: 2025-01-15T12:51:04.648568Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 15 12:51:04.658012 waagent[1916]: 2025-01-15T12:51:04.657952Z INFO Daemon Daemon cloud-init is enabled: False Jan 15 12:51:04.662915 waagent[1916]: 2025-01-15T12:51:04.662865Z INFO Daemon Daemon Copying ovf-env.xml Jan 15 12:51:04.857975 waagent[1916]: 2025-01-15T12:51:04.857830Z INFO Daemon Daemon Successfully mounted dvd Jan 15 12:51:04.873340 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 15 12:51:04.875479 waagent[1916]: 2025-01-15T12:51:04.873454Z INFO Daemon Daemon Detect protocol endpoint Jan 15 12:51:04.878919 waagent[1916]: 2025-01-15T12:51:04.878850Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 12:51:04.884884 waagent[1916]: 2025-01-15T12:51:04.884820Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 15 12:51:04.891807 waagent[1916]: 2025-01-15T12:51:04.891753Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 15 12:51:04.897585 waagent[1916]: 2025-01-15T12:51:04.897528Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 15 12:51:04.903196 waagent[1916]: 2025-01-15T12:51:04.903128Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 15 12:51:04.954730 waagent[1916]: 2025-01-15T12:51:04.954686Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 15 12:51:04.961409 waagent[1916]: 2025-01-15T12:51:04.961378Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 15 12:51:04.966917 waagent[1916]: 2025-01-15T12:51:04.966861Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 15 12:51:05.218225 waagent[1916]: 2025-01-15T12:51:05.217461Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 15 12:51:05.223977 waagent[1916]: 2025-01-15T12:51:05.223909Z INFO Daemon Daemon Forcing an update of the goal state. Jan 15 12:51:05.233062 waagent[1916]: 2025-01-15T12:51:05.233011Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 12:51:05.315135 waagent[1916]: 2025-01-15T12:51:05.315088Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 15 12:51:05.320952 waagent[1916]: 2025-01-15T12:51:05.320904Z INFO Daemon Jan 15 12:51:05.323719 waagent[1916]: 2025-01-15T12:51:05.323670Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 512b3a34-3e18-4314-a831-9544faee06c2 eTag: 2484770089297833319 source: Fabric] Jan 15 12:51:05.334997 waagent[1916]: 2025-01-15T12:51:05.334950Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 15 12:51:05.341902 waagent[1916]: 2025-01-15T12:51:05.341855Z INFO Daemon Jan 15 12:51:05.344677 waagent[1916]: 2025-01-15T12:51:05.344635Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 15 12:51:05.355898 waagent[1916]: 2025-01-15T12:51:05.355859Z INFO Daemon Daemon Downloading artifacts profile blob Jan 15 12:51:05.443666 waagent[1916]: 2025-01-15T12:51:05.443576Z INFO Daemon Downloaded certificate {'thumbprint': '65B9D80BE2E30880FEAF441BB996E4783200CAB6', 'hasPrivateKey': False} Jan 15 12:51:05.453793 waagent[1916]: 2025-01-15T12:51:05.453745Z INFO Daemon Downloaded certificate {'thumbprint': '20CCB15A95C4C8908E001A0B31A5071A80364EDF', 'hasPrivateKey': True} Jan 15 12:51:05.463531 waagent[1916]: 2025-01-15T12:51:05.463481Z INFO Daemon Fetch goal state completed Jan 15 12:51:05.478934 waagent[1916]: 2025-01-15T12:51:05.478852Z INFO Daemon Daemon Starting provisioning Jan 15 12:51:05.484251 waagent[1916]: 2025-01-15T12:51:05.484176Z INFO Daemon Daemon Handle ovf-env.xml. 
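Protocol detection above boils down to plain HTTP against the wireserver at 168.63.129.16, starting with the version document that coreos-metadata also fetched earlier. A sketch of that probe, assuming (as the earlier successful fetches suggest) that this endpoint needs no special headers:

    # Probe the Azure wireserver version document that waagent and
    # coreos-metadata fetch in the records above.
    import urllib.request

    with urllib.request.urlopen("http://168.63.129.16/?comp=versions",
                                timeout=5) as resp:
        print(resp.read().decode()[:200])  # XML listing supported versions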
Jan 15 12:51:05.488834 waagent[1916]: 2025-01-15T12:51:05.488782Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-b64d8040ed] Jan 15 12:51:05.512222 waagent[1916]: 2025-01-15T12:51:05.511622Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-b64d8040ed] Jan 15 12:51:05.518327 waagent[1916]: 2025-01-15T12:51:05.518260Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 15 12:51:05.524370 waagent[1916]: 2025-01-15T12:51:05.524318Z INFO Daemon Daemon Primary interface is [eth0] Jan 15 12:51:05.554888 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:51:05.554895 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 12:51:05.555890 waagent[1916]: 2025-01-15T12:51:05.555673Z INFO Daemon Daemon Create user account if not exists Jan 15 12:51:05.554922 systemd-networkd[1381]: eth0: DHCP lease lost Jan 15 12:51:05.561337 waagent[1916]: 2025-01-15T12:51:05.561270Z INFO Daemon Daemon User core already exists, skip useradd Jan 15 12:51:05.566902 waagent[1916]: 2025-01-15T12:51:05.566847Z INFO Daemon Daemon Configure sudoer Jan 15 12:51:05.571310 systemd-networkd[1381]: eth0: DHCPv6 lease lost Jan 15 12:51:05.571839 waagent[1916]: 2025-01-15T12:51:05.571766Z INFO Daemon Daemon Configure sshd Jan 15 12:51:05.577669 waagent[1916]: 2025-01-15T12:51:05.577603Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 15 12:51:05.590504 waagent[1916]: 2025-01-15T12:51:05.590401Z INFO Daemon Daemon Deploy ssh public key. Jan 15 12:51:05.600261 systemd-networkd[1381]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 12:51:06.707211 waagent[1916]: 2025-01-15T12:51:06.706166Z INFO Daemon Daemon Provisioning complete Jan 15 12:51:06.725563 waagent[1916]: 2025-01-15T12:51:06.725512Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 15 12:51:06.731827 waagent[1916]: 2025-01-15T12:51:06.731764Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 15 12:51:06.741516 waagent[1916]: 2025-01-15T12:51:06.741465Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 15 12:51:06.870914 waagent[2016]: 2025-01-15T12:51:06.870839Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 15 12:51:06.871884 waagent[2016]: 2025-01-15T12:51:06.871396Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 15 12:51:06.871884 waagent[2016]: 2025-01-15T12:51:06.871467Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 15 12:51:06.916230 waagent[2016]: 2025-01-15T12:51:06.915771Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 15 12:51:06.916230 waagent[2016]: 2025-01-15T12:51:06.916002Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 12:51:06.916230 waagent[2016]: 2025-01-15T12:51:06.916062Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 12:51:06.924651 waagent[2016]: 2025-01-15T12:51:06.924588Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 12:51:06.930438 waagent[2016]: 2025-01-15T12:51:06.930393Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 15 12:51:06.930970 waagent[2016]: 2025-01-15T12:51:06.930926Z INFO ExtHandler Jan 15 12:51:06.931037 waagent[2016]: 2025-01-15T12:51:06.931008Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6ac3329f-bd04-4219-8b0d-7088054882c2 eTag: 2484770089297833319 source: Fabric] Jan 15 12:51:06.931354 waagent[2016]: 2025-01-15T12:51:06.931311Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 15 12:51:06.931903 waagent[2016]: 2025-01-15T12:51:06.931860Z INFO ExtHandler Jan 15 12:51:06.931965 waagent[2016]: 2025-01-15T12:51:06.931937Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 15 12:51:06.936547 waagent[2016]: 2025-01-15T12:51:06.936512Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 15 12:51:07.014515 waagent[2016]: 2025-01-15T12:51:07.014374Z INFO ExtHandler Downloaded certificate {'thumbprint': '65B9D80BE2E30880FEAF441BB996E4783200CAB6', 'hasPrivateKey': False} Jan 15 12:51:07.014889 waagent[2016]: 2025-01-15T12:51:07.014843Z INFO ExtHandler Downloaded certificate {'thumbprint': '20CCB15A95C4C8908E001A0B31A5071A80364EDF', 'hasPrivateKey': True} Jan 15 12:51:07.015319 waagent[2016]: 2025-01-15T12:51:07.015275Z INFO ExtHandler Fetch goal state completed Jan 15 12:51:07.033178 waagent[2016]: 2025-01-15T12:51:07.033120Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2016 Jan 15 12:51:07.033362 waagent[2016]: 2025-01-15T12:51:07.033325Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 15 12:51:07.035011 waagent[2016]: 2025-01-15T12:51:07.034965Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 15 12:51:07.035406 waagent[2016]: 2025-01-15T12:51:07.035369Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 15 12:51:07.055762 waagent[2016]: 2025-01-15T12:51:07.055717Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 15 12:51:07.055968 waagent[2016]: 2025-01-15T12:51:07.055925Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jan 15 12:51:07.062183 waagent[2016]: 2025-01-15T12:51:07.062135Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 15 12:51:07.069083 systemd[1]: Reloading requested from client PID 2031 ('systemctl') (unit waagent.service)... Jan 15 12:51:07.069364 systemd[1]: Reloading... Jan 15 12:51:07.148233 zram_generator::config[2074]: No configuration found. Jan 15 12:51:07.244406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:51:07.323133 systemd[1]: Reloading finished in 253 ms. Jan 15 12:51:07.342791 waagent[2016]: 2025-01-15T12:51:07.342343Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 15 12:51:07.349000 systemd[1]: Reloading requested from client PID 2124 ('systemctl') (unit waagent.service)... Jan 15 12:51:07.349017 systemd[1]: Reloading... Jan 15 12:51:07.432275 zram_generator::config[2161]: No configuration found. Jan 15 12:51:07.532047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:51:07.605095 systemd[1]: Reloading finished in 255 ms. Jan 15 12:51:07.624367 waagent[2016]: 2025-01-15T12:51:07.623265Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 15 12:51:07.624367 waagent[2016]: 2025-01-15T12:51:07.623429Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 15 12:51:08.084241 waagent[2016]: 2025-01-15T12:51:08.083957Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 15 12:51:08.084654 waagent[2016]: 2025-01-15T12:51:08.084596Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 15 12:51:08.085480 waagent[2016]: 2025-01-15T12:51:08.085425Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 15 12:51:08.085892 waagent[2016]: 2025-01-15T12:51:08.085792Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 15 12:51:08.086382 waagent[2016]: 2025-01-15T12:51:08.086272Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 15 12:51:08.086491 waagent[2016]: 2025-01-15T12:51:08.086379Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 15 12:51:08.087018 waagent[2016]: 2025-01-15T12:51:08.086915Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 15 12:51:08.087110 waagent[2016]: 2025-01-15T12:51:08.087014Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 15 12:51:08.087890 waagent[2016]: 2025-01-15T12:51:08.087833Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 15 12:51:08.088138 waagent[2016]: 2025-01-15T12:51:08.088035Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 12:51:08.088863 waagent[2016]: 2025-01-15T12:51:08.088108Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 12:51:08.089377 waagent[2016]: 2025-01-15T12:51:08.089314Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 12:51:08.089635 waagent[2016]: 2025-01-15T12:51:08.089582Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 15 12:51:08.090323 waagent[2016]: 2025-01-15T12:51:08.090262Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 12:51:08.090457 waagent[2016]: 2025-01-15T12:51:08.090391Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 15 12:51:08.090457 waagent[2016]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 15 12:51:08.090457 waagent[2016]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 15 12:51:08.090457 waagent[2016]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 15 12:51:08.090457 waagent[2016]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 15 12:51:08.090457 waagent[2016]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 12:51:08.090457 waagent[2016]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 12:51:08.091096 waagent[2016]: 2025-01-15T12:51:08.090960Z INFO EnvHandler ExtHandler Configure routes Jan 15 12:51:08.092469 waagent[2016]: 2025-01-15T12:51:08.092410Z INFO EnvHandler ExtHandler Gateway:None Jan 15 12:51:08.092961 waagent[2016]: 2025-01-15T12:51:08.092908Z INFO EnvHandler ExtHandler Routes:None Jan 15 12:51:08.094851 waagent[2016]: 2025-01-15T12:51:08.094788Z INFO ExtHandler ExtHandler Jan 15 12:51:08.094950 waagent[2016]: 2025-01-15T12:51:08.094908Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3e98823b-fcc5-498c-bbea-802b730d0160 correlation ce7c7fac-0385-41b0-a555-7d1f76a9444d created: 2025-01-15T12:49:47.185921Z] Jan 15 12:51:08.096730 waagent[2016]: 2025-01-15T12:51:08.096636Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
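The /proc/net/route dump above stores each address as little-endian hex: 0114C80A is the DHCP gateway 10.200.20.1 seen earlier, 10813FA8 is the wireserver 168.63.129.16, and FEA9FEA9 is IMDS at 169.254.169.254. A short decoder:

    # Decode the little-endian hex addresses in the /proc/net/route
    # dump above (e.g. 0114C80A -> 10.200.20.1).
    import socket
    import struct

    def decode(hex_addr: str) -> str:
        return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

    for dest, gw in [("00000000", "0114C80A"), ("10813FA8", "0114C80A"),
                     ("FEA9FEA9", "0114C80A")]:
        print(f"dest {decode(dest):<15} via {decode(gw)}")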
Jan 15 12:51:08.098956 waagent[2016]: 2025-01-15T12:51:08.098847Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Jan 15 12:51:08.142810 waagent[2016]: 2025-01-15T12:51:08.142639Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 24672C3A-AD1E-4218-8322-D7D723F7A8C4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 15 12:51:08.147011 waagent[2016]: 2025-01-15T12:51:08.146911Z INFO MonitorHandler ExtHandler Network interfaces: Jan 15 12:51:08.147011 waagent[2016]: Executing ['ip', '-a', '-o', 'link']: Jan 15 12:51:08.147011 waagent[2016]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 15 12:51:08.147011 waagent[2016]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:a2:92 brd ff:ff:ff:ff:ff:ff Jan 15 12:51:08.147011 waagent[2016]: 3: enP37724s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:a2:92 brd ff:ff:ff:ff:ff:ff\ altname enP37724p0s2 Jan 15 12:51:08.147011 waagent[2016]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 15 12:51:08.147011 waagent[2016]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 15 12:51:08.147011 waagent[2016]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 15 12:51:08.147011 waagent[2016]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 15 12:51:08.147011 waagent[2016]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 15 12:51:08.147011 waagent[2016]: 2: eth0 inet6 fe80::20d:3aff:fef6:a292/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 15 12:51:08.147011 waagent[2016]: 3: enP37724s1 inet6 fe80::20d:3aff:fef6:a292/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 15 12:51:08.618338 waagent[2016]: 2025-01-15T12:51:08.618232Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 15 12:51:08.618338 waagent[2016]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:51:08.618338 waagent[2016]: pkts bytes target prot opt in out source destination Jan 15 12:51:08.618338 waagent[2016]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:51:08.618338 waagent[2016]: pkts bytes target prot opt in out source destination Jan 15 12:51:08.618338 waagent[2016]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:51:08.618338 waagent[2016]: pkts bytes target prot opt in out source destination Jan 15 12:51:08.618338 waagent[2016]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 12:51:08.618338 waagent[2016]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 12:51:08.618338 waagent[2016]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 12:51:08.621553 waagent[2016]: 2025-01-15T12:51:08.621472Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 15 12:51:08.621553 waagent[2016]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:51:08.621553 waagent[2016]: pkts bytes target prot opt in out source destination Jan 15 12:51:08.621553 waagent[2016]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:51:08.621553 waagent[2016]: pkts bytes target prot opt in out source destination Jan 15 12:51:08.621553 waagent[2016]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:51:08.621553 waagent[2016]: pkts bytes target prot opt in out source destination Jan 15 12:51:08.621553 waagent[2016]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 12:51:08.621553 waagent[2016]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 12:51:08.621553 waagent[2016]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 12:51:08.621839 waagent[2016]: 2025-01-15T12:51:08.621792Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 15 12:51:13.655087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 12:51:13.660570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:51:13.904387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:13.908124 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:51:13.954830 kubelet[2260]: E0115 12:51:13.954730 2260 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:51:13.958734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:51:13.961425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:51:24.155279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 12:51:24.160368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:51:24.411973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
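The fabric firewall printed twice above is one OUTPUT-chain ruleset: DNS to the wireserver is allowed for everyone, other wireserver traffic only for UID 0 (the agent itself), and new or invalid connections from anyone else are dropped. A sketch of equivalent iptables invocations, not waagent's own code path (root required, and rule order matters):

    # Reconstruct the OUTPUT-chain ruleset waagent prints above.
    # A sketch of equivalent commands, not what the agent actually runs.
    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        # Allow DNS to the wireserver from any process.
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53",
         "-j", "ACCEPT"],
        # Allow root-owned (UID 0) traffic, i.e. the agent itself.
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # Drop new or invalid connections from everything else.
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)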
Jan 15 12:51:24.423564 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:51:24.465013 kubelet[2281]: E0115 12:51:24.464953 2281 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:51:24.467323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:51:24.467467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:51:25.272151 chronyd[1758]: Selected source PHC0 Jan 15 12:51:34.655250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 15 12:51:34.665385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:51:34.912397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:34.917380 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:51:34.960960 kubelet[2302]: E0115 12:51:34.960907 2302 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:51:34.965376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:51:34.965537 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:51:43.182419 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 15 12:51:45.155267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 15 12:51:45.163370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:51:45.411363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:45.415157 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:51:45.456671 kubelet[2323]: E0115 12:51:45.456589 2323 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:51:45.459481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:51:45.459664 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:51:47.037790 update_engine[1787]: I20250115 12:51:47.037217 1787 update_attempter.cc:509] Updating boot flags... Jan 15 12:51:47.100276 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2344) Jan 15 12:51:47.181235 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2348) Jan 15 12:51:54.149670 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
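Every kubelet restart in the loop above dies on the same missing file: /var/lib/kubelet/config.yaml, the KubeletConfiguration that kubeadm writes during "kubeadm init" or "kubeadm join". Until this node joins a cluster, the roughly ten-second restart cycle is expected. The same preflight check, sketched:

    # The kubelet restart loop above always fails on one missing file;
    # kubeadm normally writes it when the node joins a cluster.
    import sys
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if not CONFIG.is_file():
        sys.exit(f"kubelet not configured yet: {CONFIG} missing "
                 "(node has not run kubeadm init/join)")
    print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes)")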
Jan 15 12:51:54.160453 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:37946.service - OpenSSH per-connection server daemon (10.200.16.10:37946). Jan 15 12:51:54.695824 sshd[2398]: Accepted publickey for core from 10.200.16.10 port 37946 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:51:54.697083 sshd[2398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:51:54.701109 systemd-logind[1784]: New session 3 of user core. Jan 15 12:51:54.710507 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 12:51:55.099714 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:37954.service - OpenSSH per-connection server daemon (10.200.16.10:37954). Jan 15 12:51:55.478652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 15 12:51:55.486748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:51:55.565067 sshd[2403]: Accepted publickey for core from 10.200.16.10 port 37954 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:51:55.565766 sshd[2403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:51:55.570727 systemd-logind[1784]: New session 4 of user core. Jan 15 12:51:55.578484 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 12:51:55.729395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:55.732494 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:51:55.774318 kubelet[2419]: E0115 12:51:55.774258 2419 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:51:55.776747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:51:55.776889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:51:55.897783 sshd[2403]: pam_unix(sshd:session): session closed for user core Jan 15 12:51:55.900395 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:37954.service: Deactivated successfully. Jan 15 12:51:55.903518 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 12:51:55.904545 systemd-logind[1784]: Session 4 logged out. Waiting for processes to exit. Jan 15 12:51:55.905329 systemd-logind[1784]: Removed session 4. Jan 15 12:51:55.992433 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:39210.service - OpenSSH per-connection server daemon (10.200.16.10:39210). Jan 15 12:51:56.461717 sshd[2432]: Accepted publickey for core from 10.200.16.10 port 39210 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:51:56.463004 sshd[2432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:51:56.467668 systemd-logind[1784]: New session 5 of user core. Jan 15 12:51:56.475582 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 12:51:56.795425 sshd[2432]: pam_unix(sshd:session): session closed for user core Jan 15 12:51:56.798394 systemd-logind[1784]: Session 5 logged out. Waiting for processes to exit. Jan 15 12:51:56.798561 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:39210.service: Deactivated successfully. 
Jan 15 12:51:56.801885 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 12:51:56.803416 systemd-logind[1784]: Removed session 5. Jan 15 12:51:56.873447 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:39222.service - OpenSSH per-connection server daemon (10.200.16.10:39222). Jan 15 12:51:57.323486 sshd[2440]: Accepted publickey for core from 10.200.16.10 port 39222 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:51:57.324837 sshd[2440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:51:57.329563 systemd-logind[1784]: New session 6 of user core. Jan 15 12:51:57.335631 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 12:51:57.647688 sshd[2440]: pam_unix(sshd:session): session closed for user core Jan 15 12:51:57.650739 systemd-logind[1784]: Session 6 logged out. Waiting for processes to exit. Jan 15 12:51:57.652148 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:39222.service: Deactivated successfully. Jan 15 12:51:57.655718 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 12:51:57.657117 systemd-logind[1784]: Removed session 6. Jan 15 12:51:57.727442 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:39236.service - OpenSSH per-connection server daemon (10.200.16.10:39236). Jan 15 12:51:58.144099 sshd[2448]: Accepted publickey for core from 10.200.16.10 port 39236 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:51:58.145410 sshd[2448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:51:58.149146 systemd-logind[1784]: New session 7 of user core. Jan 15 12:51:58.156536 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 12:51:58.492601 sudo[2452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 12:51:58.492871 sudo[2452]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:51:58.504127 sudo[2452]: pam_unix(sudo:session): session closed for user root Jan 15 12:51:58.578493 sshd[2448]: pam_unix(sshd:session): session closed for user core Jan 15 12:51:58.585024 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:39236.service: Deactivated successfully. Jan 15 12:51:58.585073 systemd-logind[1784]: Session 7 logged out. Waiting for processes to exit. Jan 15 12:51:58.587290 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 12:51:58.588369 systemd-logind[1784]: Removed session 7. Jan 15 12:51:58.653442 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:39244.service - OpenSSH per-connection server daemon (10.200.16.10:39244). Jan 15 12:51:59.064939 sshd[2457]: Accepted publickey for core from 10.200.16.10 port 39244 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:51:59.066320 sshd[2457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:51:59.069917 systemd-logind[1784]: New session 8 of user core. Jan 15 12:51:59.080544 systemd[1]: Started session-8.scope - Session 8 of User core. 
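In session 7 above the bootstrap flow ran setenforce 1; in session 8 below it deletes the stock audit rule files and restarts audit-rules.service. That service rebuilds the active rule set from every *.rules file under /etc/audit/rules.d/ via augenrules, one directive per line, which is why both auditctl and augenrules then report "No rules". For orientation, a rules.d fragment uses syntax like this (a hypothetical illustration of the format only; the contents of the deleted 80-selinux.rules and 99-default.rules are not shown in this log):

    # /etc/audit/rules.d/99-example.rules -- hypothetical fragment
    -D                                         # flush existing rules
    -b 8192                                    # kernel audit buffer size
    -w /etc/kubernetes/ -p wa -k kube-config   # watch writes/attribute changes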
Jan 15 12:51:59.304387 sudo[2462]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 12:51:59.304649 sudo[2462]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:51:59.308074 sudo[2462]: pam_unix(sudo:session): session closed for user root Jan 15 12:51:59.312769 sudo[2461]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 15 12:51:59.313035 sudo[2461]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:51:59.324431 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 15 12:51:59.326685 auditctl[2465]: No rules Jan 15 12:51:59.327010 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 12:51:59.327285 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 15 12:51:59.334578 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 12:51:59.353261 augenrules[2484]: No rules Jan 15 12:51:59.353806 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 12:51:59.356555 sudo[2461]: pam_unix(sudo:session): session closed for user root Jan 15 12:51:59.430425 sshd[2457]: pam_unix(sshd:session): session closed for user core Jan 15 12:51:59.433585 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:39244.service: Deactivated successfully. Jan 15 12:51:59.436483 systemd-logind[1784]: Session 8 logged out. Waiting for processes to exit. Jan 15 12:51:59.436891 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 12:51:59.437913 systemd-logind[1784]: Removed session 8. Jan 15 12:51:59.521441 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:39256.service - OpenSSH per-connection server daemon (10.200.16.10:39256). Jan 15 12:51:59.985281 sshd[2493]: Accepted publickey for core from 10.200.16.10 port 39256 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:51:59.986581 sshd[2493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:51:59.990395 systemd-logind[1784]: New session 9 of user core. Jan 15 12:51:59.998539 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 12:52:00.251409 sudo[2497]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 12:52:00.251681 sudo[2497]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:52:01.207437 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 12:52:01.207676 (dockerd)[2512]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 12:52:01.760235 dockerd[2512]: time="2025-01-15T12:52:01.759686523Z" level=info msg="Starting up" Jan 15 12:52:02.103792 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport323383985-merged.mount: Deactivated successfully. Jan 15 12:52:02.228955 dockerd[2512]: time="2025-01-15T12:52:02.228875908Z" level=info msg="Loading containers: start." Jan 15 12:52:02.376303 kernel: Initializing XFRM netlink socket Jan 15 12:52:02.552416 systemd-networkd[1381]: docker0: Link UP Jan 15 12:52:02.570391 dockerd[2512]: time="2025-01-15T12:52:02.570352364Z" level=info msg="Loading containers: done." 
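dockerd above starts with its option environment variables unset and, as the next entries confirm, settles on the overlay2 storage driver (with a warning that native overlay diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR). On a stock setup the same choice could be pinned in /etc/docker/daemon.json; a minimal sketch with assumed values follows — this host's actual file, if it exists at all, is not shown in the log. daemon.json is strict JSON, so the explanation has to live here rather than in comments.

    {
      "storage-driver": "overlay2",
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" }
    }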
Jan 15 12:52:02.589505 dockerd[2512]: time="2025-01-15T12:52:02.589458957Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 12:52:02.589702 dockerd[2512]: time="2025-01-15T12:52:02.589572396Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 15 12:52:02.589702 dockerd[2512]: time="2025-01-15T12:52:02.589682596Z" level=info msg="Daemon has completed initialization" Jan 15 12:52:02.635238 dockerd[2512]: time="2025-01-15T12:52:02.635053685Z" level=info msg="API listen on /run/docker.sock" Jan 15 12:52:02.636166 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 12:52:03.098349 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2621882403-merged.mount: Deactivated successfully. Jan 15 12:52:04.086069 containerd[1810]: time="2025-01-15T12:52:04.086022516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 15 12:52:04.916570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487459315.mount: Deactivated successfully. Jan 15 12:52:05.905132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 15 12:52:05.912409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:52:06.009403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:52:06.014005 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:52:06.054635 kubelet[2716]: E0115 12:52:06.054572 2716 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:52:06.056763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:52:06.056902 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
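Restart attempts 3 through 6 all fail identically: /var/lib/kubelet/config.yaml does not exist, so the kubelet exits before doing any work and systemd keeps rescheduling it. The loop only ends once the bootstrap flow runs kubeadm, which generates that file. A trimmed sketch of what kubeadm typically writes there (representative values only; clusterDNS in particular is an assumption, and cgroupDriver is chosen to match the "CgroupDriver":"cgroupfs" visible in the kubelet start-up dump further below):

    # /var/lib/kubelet/config.yaml -- trimmed, representative sketch
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
    cgroupDriver: cgroupfs
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10
    staticPodPath: /etc/kubernetes/manifests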
Jan 15 12:52:06.993214 containerd[1810]: time="2025-01-15T12:52:06.993120219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:07.014291 containerd[1810]: time="2025-01-15T12:52:07.014253007Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Jan 15 12:52:07.028907 containerd[1810]: time="2025-01-15T12:52:07.028860411Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:07.033899 containerd[1810]: time="2025-01-15T12:52:07.033849279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:07.035517 containerd[1810]: time="2025-01-15T12:52:07.034842597Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.948772041s" Jan 15 12:52:07.035517 containerd[1810]: time="2025-01-15T12:52:07.034882117Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 15 12:52:07.055108 containerd[1810]: time="2025-01-15T12:52:07.055070627Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 15 12:52:09.838928 containerd[1810]: time="2025-01-15T12:52:09.838873601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:09.843729 containerd[1810]: time="2025-01-15T12:52:09.843693109Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Jan 15 12:52:09.847745 containerd[1810]: time="2025-01-15T12:52:09.847700139Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:09.853845 containerd[1810]: time="2025-01-15T12:52:09.853786364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:09.855043 containerd[1810]: time="2025-01-15T12:52:09.855006321Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.799894254s" Jan 15 12:52:09.855043 containerd[1810]: time="2025-01-15T12:52:09.855042641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 15 
12:52:09.873594 containerd[1810]: time="2025-01-15T12:52:09.873510476Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 15 12:52:11.322252 containerd[1810]: time="2025-01-15T12:52:11.321759045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:11.324304 containerd[1810]: time="2025-01-15T12:52:11.324242679Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Jan 15 12:52:11.328699 containerd[1810]: time="2025-01-15T12:52:11.328647468Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:11.335184 containerd[1810]: time="2025-01-15T12:52:11.335085972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:11.336233 containerd[1810]: time="2025-01-15T12:52:11.336174969Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.462623693s" Jan 15 12:52:11.336428 containerd[1810]: time="2025-01-15T12:52:11.336325889Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 15 12:52:11.357728 containerd[1810]: time="2025-01-15T12:52:11.357628357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 15 12:52:12.340904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267484502.mount: Deactivated successfully. 
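The tags being pulled here — kube-apiserver, kube-controller-manager and kube-scheduler at v1.29.12, with kube-proxy in flight, plus coredns v1.11.1, pause 3.9 and etcd 3.5.10-0 further on — are the control-plane image set kubeadm derives from its ClusterConfiguration. A sketch that pins the same set (values inferred from the log, not read from the node):

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.29.12
    imageRepository: registry.k8s.io

With a file like this, kubeadm config images pull --config <file> would pre-fetch the same images ahead of kubeadm init rather than leaving the pulls to run during bootstrap.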
Jan 15 12:52:15.678070 containerd[1810]: time="2025-01-15T12:52:15.677417716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:15.685206 containerd[1810]: time="2025-01-15T12:52:15.685157217Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Jan 15 12:52:15.688514 containerd[1810]: time="2025-01-15T12:52:15.688466649Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:15.693639 containerd[1810]: time="2025-01-15T12:52:15.693565036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:15.694511 containerd[1810]: time="2025-01-15T12:52:15.694106435Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 4.336436718s" Jan 15 12:52:15.694511 containerd[1810]: time="2025-01-15T12:52:15.694144475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 15 12:52:15.712701 containerd[1810]: time="2025-01-15T12:52:15.712491470Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 15 12:52:16.155097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 15 12:52:16.164394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:52:16.265389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:52:16.269754 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:52:16.309525 kubelet[2776]: E0115 12:52:16.309447 2776 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:52:16.311653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:52:16.311804 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:52:25.478235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339484602.mount: Deactivated successfully. 
Jan 15 12:52:26.313650 containerd[1810]: time="2025-01-15T12:52:26.313597604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:26.316552 containerd[1810]: time="2025-01-15T12:52:26.316510676Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 15 12:52:26.322185 containerd[1810]: time="2025-01-15T12:52:26.322137222Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:26.328164 containerd[1810]: time="2025-01-15T12:52:26.328086168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:26.329546 containerd[1810]: time="2025-01-15T12:52:26.329129325Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 10.616600576s" Jan 15 12:52:26.329546 containerd[1810]: time="2025-01-15T12:52:26.329168125Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 15 12:52:26.349916 containerd[1810]: time="2025-01-15T12:52:26.349876593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 15 12:52:26.405018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 15 12:52:26.411454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:52:26.504403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:52:26.517560 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:52:26.559806 kubelet[2847]: E0115 12:52:26.559720 2847 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:52:26.562059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:52:26.562226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:52:27.342510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228299620.mount: Deactivated successfully. 
Jan 15 12:52:27.364255 containerd[1810]: time="2025-01-15T12:52:27.363589796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:27.366976 containerd[1810]: time="2025-01-15T12:52:27.366941228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 15 12:52:27.370840 containerd[1810]: time="2025-01-15T12:52:27.370806538Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:27.376340 containerd[1810]: time="2025-01-15T12:52:27.376280484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:27.377425 containerd[1810]: time="2025-01-15T12:52:27.376964963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.02689365s" Jan 15 12:52:27.377425 containerd[1810]: time="2025-01-15T12:52:27.376998763Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 15 12:52:27.396828 containerd[1810]: time="2025-01-15T12:52:27.396731114Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 15 12:52:28.135285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194639806.mount: Deactivated successfully. Jan 15 12:52:30.944105 containerd[1810]: time="2025-01-15T12:52:30.944047242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:30.947563 containerd[1810]: time="2025-01-15T12:52:30.947469754Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 15 12:52:30.952541 containerd[1810]: time="2025-01-15T12:52:30.952474382Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:30.958677 containerd[1810]: time="2025-01-15T12:52:30.958602847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:30.960048 containerd[1810]: time="2025-01-15T12:52:30.959901323Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.563134009s" Jan 15 12:52:30.960048 containerd[1810]: time="2025-01-15T12:52:30.959941083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 15 12:52:36.064688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
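kubelet.service is stopped here for good reason: after the daemon reload below it comes back with real arguments. The deprecation warnings that follow (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) correspond to flags kubeadm passes through /var/lib/kubelet/kubeadm-flags.env, the file referenced by the drop-in sketched earlier. Representative contents (assumed, chosen to match those warnings, the pause:3.9 pull above, and the Flexvolume path probed below):

    # /var/lib/kubelet/kubeadm-flags.env -- representative sketch
    KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9 --volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"

Note that the restart at 12:52:36 warns only about KUBELET_EXTRA_ARGS, consistent with kubeadm-flags.env now being present; with it and config.yaml in place the crash loop is over, and the remaining errors below are connection refusals against 10.200.20.14:6443 that persist only until the static kube-apiserver pod is up.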
Jan 15 12:52:36.073671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:52:36.089148 systemd[1]: Reloading requested from client PID 2975 ('systemctl') (unit session-9.scope)... Jan 15 12:52:36.089174 systemd[1]: Reloading... Jan 15 12:52:36.189382 zram_generator::config[3015]: No configuration found. Jan 15 12:52:36.300726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:52:36.373898 systemd[1]: Reloading finished in 284 ms. Jan 15 12:52:36.418878 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 12:52:36.419109 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 12:52:36.419577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:52:36.428629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:52:36.539340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:52:36.539600 (kubelet)[3094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 12:52:36.581006 kubelet[3094]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:52:36.582382 kubelet[3094]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 12:52:36.582382 kubelet[3094]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:52:36.582382 kubelet[3094]: I0115 12:52:36.581304 3094 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 12:52:37.533033 kubelet[3094]: I0115 12:52:37.532998 3094 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 15 12:52:37.533033 kubelet[3094]: I0115 12:52:37.533027 3094 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 12:52:37.533285 kubelet[3094]: I0115 12:52:37.533262 3094 server.go:919] "Client rotation is on, will bootstrap in background" Jan 15 12:52:37.548547 kubelet[3094]: E0115 12:52:37.548509 3094 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:37.548854 kubelet[3094]: I0115 12:52:37.548671 3094 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 12:52:37.558074 kubelet[3094]: I0115 12:52:37.558023 3094 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 12:52:37.560295 kubelet[3094]: I0115 12:52:37.559746 3094 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 12:52:37.560295 kubelet[3094]: I0115 12:52:37.560149 3094 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 15 12:52:37.560295 kubelet[3094]: I0115 12:52:37.560180 3094 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 12:52:37.560295 kubelet[3094]: I0115 12:52:37.560225 3094 container_manager_linux.go:301] "Creating device plugin manager" Jan 15 12:52:37.561467 kubelet[3094]: I0115 12:52:37.561445 3094 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:52:37.563657 kubelet[3094]: I0115 12:52:37.563636 3094 kubelet.go:396] "Attempting to sync node with API server" Jan 15 12:52:37.563688 kubelet[3094]: I0115 12:52:37.563665 3094 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 12:52:37.563881 kubelet[3094]: I0115 12:52:37.563863 3094 kubelet.go:312] "Adding apiserver pod source" Jan 15 12:52:37.563906 kubelet[3094]: I0115 12:52:37.563884 3094 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 12:52:37.564820 kubelet[3094]: W0115 12:52:37.564764 3094 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b64d8040ed&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:37.564820 kubelet[3094]: E0115 12:52:37.564821 3094 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b64d8040ed&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:37.567819 kubelet[3094]: I0115 12:52:37.567797 3094 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 12:52:37.568227 kubelet[3094]: I0115 12:52:37.568079 3094 
kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 12:52:37.568504 kubelet[3094]: W0115 12:52:37.568478 3094 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 12:52:37.568993 kubelet[3094]: I0115 12:52:37.568966 3094 server.go:1256] "Started kubelet" Jan 15 12:52:37.569382 kubelet[3094]: W0115 12:52:37.569065 3094 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:37.569382 kubelet[3094]: E0115 12:52:37.569107 3094 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:37.573629 kubelet[3094]: I0115 12:52:37.573597 3094 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 12:52:37.574556 kubelet[3094]: E0115 12:52:37.574308 3094 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-b64d8040ed.181adec82916fddf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-b64d8040ed,UID:ci-4081.3.0-a-b64d8040ed,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-b64d8040ed,},FirstTimestamp:2025-01-15 12:52:37.568945631 +0000 UTC m=+1.025191411,LastTimestamp:2025-01-15 12:52:37.568945631 +0000 UTC m=+1.025191411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-b64d8040ed,}" Jan 15 12:52:37.577366 kubelet[3094]: I0115 12:52:37.577110 3094 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 12:52:37.577979 kubelet[3094]: I0115 12:52:37.577941 3094 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 15 12:52:37.578370 kubelet[3094]: I0115 12:52:37.578351 3094 server.go:461] "Adding debug handlers to kubelet server" Jan 15 12:52:37.579420 kubelet[3094]: I0115 12:52:37.579400 3094 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 12:52:37.579805 kubelet[3094]: I0115 12:52:37.579685 3094 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 12:52:37.580433 kubelet[3094]: I0115 12:52:37.580137 3094 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 15 12:52:37.580433 kubelet[3094]: I0115 12:52:37.580219 3094 reconciler_new.go:29] "Reconciler: start to sync state" Jan 15 12:52:37.581319 kubelet[3094]: E0115 12:52:37.581301 3094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b64d8040ed?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms" Jan 15 12:52:37.582075 kubelet[3094]: E0115 12:52:37.581687 3094 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 12:52:37.582075 kubelet[3094]: I0115 12:52:37.581855 3094 factory.go:221] Registration of the systemd container factory successfully Jan 15 12:52:37.582075 kubelet[3094]: I0115 12:52:37.581942 3094 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 12:52:37.583932 kubelet[3094]: W0115 12:52:37.583639 3094 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:37.583932 kubelet[3094]: E0115 12:52:37.583683 3094 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:37.584227 kubelet[3094]: I0115 12:52:37.584208 3094 factory.go:221] Registration of the containerd container factory successfully Jan 15 12:52:37.631465 kubelet[3094]: I0115 12:52:37.631416 3094 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 12:52:37.632618 kubelet[3094]: I0115 12:52:37.632552 3094 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 12:52:37.632618 kubelet[3094]: I0115 12:52:37.632610 3094 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 12:52:37.632722 kubelet[3094]: I0115 12:52:37.632631 3094 kubelet.go:2329] "Starting kubelet main sync loop" Jan 15 12:52:37.632722 kubelet[3094]: E0115 12:52:37.632691 3094 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 12:52:37.633818 kubelet[3094]: W0115 12:52:37.633726 3094 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:37.633818 kubelet[3094]: E0115 12:52:37.633766 3094 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:37.715087 kubelet[3094]: I0115 12:52:37.715054 3094 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.715662 kubelet[3094]: E0115 12:52:37.715465 3094 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.716509 kubelet[3094]: I0115 12:52:37.716255 3094 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 12:52:37.716509 kubelet[3094]: I0115 12:52:37.716276 3094 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 12:52:37.716509 kubelet[3094]: I0115 12:52:37.716300 3094 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:52:37.722164 kubelet[3094]: 
I0115 12:52:37.722136 3094 policy_none.go:49] "None policy: Start" Jan 15 12:52:37.722919 kubelet[3094]: I0115 12:52:37.722906 3094 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 12:52:37.723148 kubelet[3094]: I0115 12:52:37.723079 3094 state_mem.go:35] "Initializing new in-memory state store" Jan 15 12:52:37.731914 kubelet[3094]: I0115 12:52:37.731128 3094 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 12:52:37.731914 kubelet[3094]: I0115 12:52:37.731393 3094 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 12:52:37.732871 kubelet[3094]: I0115 12:52:37.732839 3094 topology_manager.go:215] "Topology Admit Handler" podUID="ddde06b96581506f58b275450ffcab53" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.734413 kubelet[3094]: I0115 12:52:37.734390 3094 topology_manager.go:215] "Topology Admit Handler" podUID="f1a2989405cdbe2e363922e312b0b9cb" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.736262 kubelet[3094]: E0115 12:52:37.736235 3094 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-b64d8040ed\" not found" Jan 15 12:52:37.736392 kubelet[3094]: I0115 12:52:37.736371 3094 topology_manager.go:215] "Topology Admit Handler" podUID="97634fc2d123bae4a0e337b22e4a1263" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.782397 kubelet[3094]: E0115 12:52:37.782364 3094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b64d8040ed?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms" Jan 15 12:52:37.882404 kubelet[3094]: I0115 12:52:37.881646 3094 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddde06b96581506f58b275450ffcab53-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b64d8040ed\" (UID: \"ddde06b96581506f58b275450ffcab53\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.882404 kubelet[3094]: I0115 12:52:37.881682 3094 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.882404 kubelet[3094]: I0115 12:52:37.881705 3094 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.882404 kubelet[3094]: I0115 12:52:37.881728 3094 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.882404 kubelet[3094]: I0115 12:52:37.881751 3094 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.882576 kubelet[3094]: I0115 12:52:37.881770 3094 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97634fc2d123bae4a0e337b22e4a1263-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-b64d8040ed\" (UID: \"97634fc2d123bae4a0e337b22e4a1263\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.882576 kubelet[3094]: I0115 12:52:37.881788 3094 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddde06b96581506f58b275450ffcab53-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b64d8040ed\" (UID: \"ddde06b96581506f58b275450ffcab53\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.882576 kubelet[3094]: I0115 12:52:37.881809 3094 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddde06b96581506f58b275450ffcab53-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-b64d8040ed\" (UID: \"ddde06b96581506f58b275450ffcab53\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.882576 kubelet[3094]: I0115 12:52:37.881830 3094 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.917581 kubelet[3094]: I0115 12:52:37.917272 3094 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:37.917711 kubelet[3094]: E0115 12:52:37.917614 3094 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:38.040168 containerd[1810]: time="2025-01-15T12:52:38.040122625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-b64d8040ed,Uid:ddde06b96581506f58b275450ffcab53,Namespace:kube-system,Attempt:0,}" Jan 15 12:52:38.044361 containerd[1810]: time="2025-01-15T12:52:38.044258136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-b64d8040ed,Uid:f1a2989405cdbe2e363922e312b0b9cb,Namespace:kube-system,Attempt:0,}" Jan 15 12:52:38.044473 containerd[1810]: time="2025-01-15T12:52:38.044274216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-b64d8040ed,Uid:97634fc2d123bae4a0e337b22e4a1263,Namespace:kube-system,Attempt:0,}" Jan 15 12:52:38.183003 kubelet[3094]: E0115 12:52:38.182967 3094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b64d8040ed?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms" Jan 15 12:52:38.320210 kubelet[3094]: I0115 12:52:38.320103 3094 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:38.320457 kubelet[3094]: E0115 12:52:38.320429 3094 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:38.389361 kubelet[3094]: W0115 12:52:38.389306 3094 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:38.389361 kubelet[3094]: E0115 12:52:38.389365 3094 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:38.632124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2472062859.mount: Deactivated successfully. Jan 15 12:52:38.662239 containerd[1810]: time="2025-01-15T12:52:38.661604590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:52:38.663617 containerd[1810]: time="2025-01-15T12:52:38.663578666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 15 12:52:38.667200 containerd[1810]: time="2025-01-15T12:52:38.667154619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:52:38.670903 containerd[1810]: time="2025-01-15T12:52:38.670153293Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:52:38.674994 containerd[1810]: time="2025-01-15T12:52:38.674956843Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 12:52:38.678065 containerd[1810]: time="2025-01-15T12:52:38.677722237Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:52:38.680266 containerd[1810]: time="2025-01-15T12:52:38.680227912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 12:52:38.684953 containerd[1810]: time="2025-01-15T12:52:38.684885302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:52:38.685977 containerd[1810]: time="2025-01-15T12:52:38.685733101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 645.504036ms" Jan 15 12:52:38.687406 containerd[1810]: time="2025-01-15T12:52:38.687374577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 643.021441ms" Jan 15 12:52:38.690722 containerd[1810]: time="2025-01-15T12:52:38.690680771Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 646.220275ms" Jan 15 12:52:38.717114 kubelet[3094]: W0115 12:52:38.716947 3094 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b64d8040ed&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:38.717114 kubelet[3094]: E0115 12:52:38.717033 3094 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b64d8040ed&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:38.862094 kubelet[3094]: W0115 12:52:38.862035 3094 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:38.862094 kubelet[3094]: E0115 12:52:38.862074 3094 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:38.893882 kubelet[3094]: W0115 12:52:38.893734 3094 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:38.893882 kubelet[3094]: E0115 12:52:38.893793 3094 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:38.983528 kubelet[3094]: E0115 12:52:38.983494 3094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b64d8040ed?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="1.6s" Jan 15 12:52:39.123789 kubelet[3094]: I0115 12:52:39.123759 3094 
kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:39.124158 kubelet[3094]: E0115 12:52:39.124137 3094 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:39.397700 containerd[1810]: time="2025-01-15T12:52:39.397605001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:52:39.398733 containerd[1810]: time="2025-01-15T12:52:39.398669759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:52:39.399544 containerd[1810]: time="2025-01-15T12:52:39.399498357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:52:39.400295 containerd[1810]: time="2025-01-15T12:52:39.399684156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:52:39.400701 containerd[1810]: time="2025-01-15T12:52:39.400666194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:52:39.400958 containerd[1810]: time="2025-01-15T12:52:39.400893794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:52:39.401800 containerd[1810]: time="2025-01-15T12:52:39.401704552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:52:39.401800 containerd[1810]: time="2025-01-15T12:52:39.401645392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:52:39.408228 containerd[1810]: time="2025-01-15T12:52:39.406127663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:52:39.408228 containerd[1810]: time="2025-01-15T12:52:39.406174623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:52:39.408228 containerd[1810]: time="2025-01-15T12:52:39.406185503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:52:39.408228 containerd[1810]: time="2025-01-15T12:52:39.406279143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:52:39.484830 containerd[1810]: time="2025-01-15T12:52:39.484779742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-b64d8040ed,Uid:97634fc2d123bae4a0e337b22e4a1263,Namespace:kube-system,Attempt:0,} returns sandbox id \"a508e9d70ddb0cb264133b6e6c2d0bfd3916953720d549e4d856fada84790d37\"" Jan 15 12:52:39.487749 containerd[1810]: time="2025-01-15T12:52:39.487501216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-b64d8040ed,Uid:f1a2989405cdbe2e363922e312b0b9cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"091128b61f31f2eaa031e58b27ba2eb6b51c4e019d8d005e9be2505308f9a280\"" Jan 15 12:52:39.493109 containerd[1810]: time="2025-01-15T12:52:39.492977365Z" level=info msg="CreateContainer within sandbox \"a508e9d70ddb0cb264133b6e6c2d0bfd3916953720d549e4d856fada84790d37\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 12:52:39.493109 containerd[1810]: time="2025-01-15T12:52:39.493012205Z" level=info msg="CreateContainer within sandbox \"091128b61f31f2eaa031e58b27ba2eb6b51c4e019d8d005e9be2505308f9a280\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 12:52:39.493538 containerd[1810]: time="2025-01-15T12:52:39.493384924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-b64d8040ed,Uid:ddde06b96581506f58b275450ffcab53,Namespace:kube-system,Attempt:0,} returns sandbox id \"56ead1b501ff41b6a4fd8a94bdaf44cec2b98d916db0f511e50bf6ffb8b237d4\"" Jan 15 12:52:39.497954 containerd[1810]: time="2025-01-15T12:52:39.497823515Z" level=info msg="CreateContainer within sandbox \"56ead1b501ff41b6a4fd8a94bdaf44cec2b98d916db0f511e50bf6ffb8b237d4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 12:52:39.581771 containerd[1810]: time="2025-01-15T12:52:39.581721423Z" level=info msg="CreateContainer within sandbox \"a508e9d70ddb0cb264133b6e6c2d0bfd3916953720d549e4d856fada84790d37\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"957be8b78e84789c9007973da2e865169e52309048dc4693603b853f1d25e065\"" Jan 15 12:52:39.582415 containerd[1810]: time="2025-01-15T12:52:39.582388662Z" level=info msg="StartContainer for \"957be8b78e84789c9007973da2e865169e52309048dc4693603b853f1d25e065\"" Jan 15 12:52:39.589733 containerd[1810]: time="2025-01-15T12:52:39.589687447Z" level=info msg="CreateContainer within sandbox \"091128b61f31f2eaa031e58b27ba2eb6b51c4e019d8d005e9be2505308f9a280\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f55a6cb648e375e5c64676c1a15b0eaf35b1d4cb4edbdf6ddb43d6f1d82ed29\"" Jan 15 12:52:39.590838 containerd[1810]: time="2025-01-15T12:52:39.590716205Z" level=info msg="StartContainer for \"3f55a6cb648e375e5c64676c1a15b0eaf35b1d4cb4edbdf6ddb43d6f1d82ed29\"" Jan 15 12:52:39.606143 containerd[1810]: time="2025-01-15T12:52:39.605370495Z" level=info msg="CreateContainer within sandbox \"56ead1b501ff41b6a4fd8a94bdaf44cec2b98d916db0f511e50bf6ffb8b237d4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c666ac4df528dd659be859cb8acd84e55f21111f6b8dc73a7a21a419c7bdea8\"" Jan 15 12:52:39.609335 containerd[1810]: time="2025-01-15T12:52:39.606850012Z" level=info msg="StartContainer for \"3c666ac4df528dd659be859cb8acd84e55f21111f6b8dc73a7a21a419c7bdea8\"" Jan 15 12:52:39.615859 kubelet[3094]: E0115 12:52:39.615307 3094 certificate_manager.go:562] 
kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.14:6443: connect: connection refused Jan 15 12:52:39.697601 containerd[1810]: time="2025-01-15T12:52:39.697022347Z" level=info msg="StartContainer for \"957be8b78e84789c9007973da2e865169e52309048dc4693603b853f1d25e065\" returns successfully" Jan 15 12:52:39.700522 containerd[1810]: time="2025-01-15T12:52:39.700378620Z" level=info msg="StartContainer for \"3f55a6cb648e375e5c64676c1a15b0eaf35b1d4cb4edbdf6ddb43d6f1d82ed29\" returns successfully" Jan 15 12:52:39.733456 containerd[1810]: time="2025-01-15T12:52:39.733411312Z" level=info msg="StartContainer for \"3c666ac4df528dd659be859cb8acd84e55f21111f6b8dc73a7a21a419c7bdea8\" returns successfully" Jan 15 12:52:40.727655 kubelet[3094]: I0115 12:52:40.727622 3094 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:42.448214 kubelet[3094]: I0115 12:52:42.447429 3094 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:42.533751 kubelet[3094]: E0115 12:52:42.533327 3094 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-b64d8040ed.181adec82916fddf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-b64d8040ed,UID:ci-4081.3.0-a-b64d8040ed,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-b64d8040ed,},FirstTimestamp:2025-01-15 12:52:37.568945631 +0000 UTC m=+1.025191411,LastTimestamp:2025-01-15 12:52:37.568945631 +0000 UTC m=+1.025191411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-b64d8040ed,}" Jan 15 12:52:42.569470 kubelet[3094]: I0115 12:52:42.569426 3094 apiserver.go:52] "Watching apiserver" Jan 15 12:52:42.580998 kubelet[3094]: I0115 12:52:42.580935 3094 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 15 12:52:42.664275 kubelet[3094]: E0115 12:52:42.663948 3094 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-b64d8040ed.181adec829d92445 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-b64d8040ed,UID:ci-4081.3.0-a-b64d8040ed,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-b64d8040ed,},FirstTimestamp:2025-01-15 12:52:37.581669445 +0000 UTC m=+1.037915225,LastTimestamp:2025-01-15 12:52:37.581669445 +0000 UTC m=+1.037915225,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-b64d8040ed,}" Jan 15 12:52:42.666831 kubelet[3094]: E0115 12:52:42.666720 3094 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 15 12:52:42.678672 kubelet[3094]: E0115 12:52:42.678477 3094 kubelet.go:1921] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-4081.3.0-a-b64d8040ed\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:45.207627 systemd[1]: Reloading requested from client PID 3365 ('systemctl') (unit session-9.scope)... Jan 15 12:52:45.207643 systemd[1]: Reloading... Jan 15 12:52:45.287367 zram_generator::config[3406]: No configuration found. Jan 15 12:52:45.399049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:52:45.471367 kubelet[3094]: W0115 12:52:45.470324 3094 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:52:45.477950 systemd[1]: Reloading finished in 270 ms. Jan 15 12:52:45.507966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:52:45.508646 kubelet[3094]: I0115 12:52:45.508006 3094 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 12:52:45.521246 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 12:52:45.521577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:52:45.529781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:52:45.739901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:52:45.749688 (kubelet)[3479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 12:52:45.807416 kubelet[3479]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:52:45.807416 kubelet[3479]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 12:52:45.807416 kubelet[3479]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:52:45.809113 kubelet[3479]: I0115 12:52:45.807477 3479 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 12:52:45.813069 kubelet[3479]: I0115 12:52:45.813029 3479 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 15 12:52:45.813069 kubelet[3479]: I0115 12:52:45.813060 3479 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 12:52:45.813285 kubelet[3479]: I0115 12:52:45.813266 3479 server.go:919] "Client rotation is on, will bootstrap in background" Jan 15 12:52:45.817362 kubelet[3479]: I0115 12:52:45.817328 3479 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 15 12:52:45.820911 kubelet[3479]: I0115 12:52:45.820873 3479 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 12:52:45.833615 kubelet[3479]: I0115 12:52:45.833556 3479 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 12:52:45.834714 kubelet[3479]: I0115 12:52:45.834266 3479 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 12:52:45.834714 kubelet[3479]: I0115 12:52:45.834453 3479 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 15 12:52:45.834714 kubelet[3479]: I0115 12:52:45.834472 3479 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 12:52:45.834714 kubelet[3479]: I0115 12:52:45.834480 3479 container_manager_linux.go:301] "Creating device plugin manager" Jan 15 12:52:45.834714 kubelet[3479]: I0115 12:52:45.834512 3479 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:52:45.834714 kubelet[3479]: I0115 12:52:45.834619 3479 kubelet.go:396] "Attempting to sync node with API server" Jan 15 12:52:45.834952 kubelet[3479]: I0115 12:52:45.834632 3479 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 12:52:45.834952 kubelet[3479]: I0115 12:52:45.834654 3479 kubelet.go:312] "Adding apiserver pod source" Jan 15 12:52:45.834952 kubelet[3479]: I0115 12:52:45.834674 3479 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 12:52:45.837088 kubelet[3479]: I0115 12:52:45.837068 3479 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 12:52:45.837689 kubelet[3479]: I0115 12:52:45.837675 3479 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 12:52:45.838316 kubelet[3479]: I0115 12:52:45.838294 3479 server.go:1256] "Started kubelet" Jan 15 12:52:45.841817 kubelet[3479]: I0115 12:52:45.841684 3479 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 12:52:45.852277 kubelet[3479]: I0115 12:52:45.851672 3479 
server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 12:52:45.852671 kubelet[3479]: I0115 12:52:45.852656 3479 server.go:461] "Adding debug handlers to kubelet server" Jan 15 12:52:45.853844 kubelet[3479]: I0115 12:52:45.853826 3479 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 12:52:45.854099 kubelet[3479]: I0115 12:52:45.854087 3479 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 12:52:45.856246 kubelet[3479]: I0115 12:52:45.856079 3479 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 15 12:52:45.858795 kubelet[3479]: I0115 12:52:45.858777 3479 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 15 12:52:45.859000 kubelet[3479]: I0115 12:52:45.858990 3479 reconciler_new.go:29] "Reconciler: start to sync state" Jan 15 12:52:45.860801 kubelet[3479]: I0115 12:52:45.860783 3479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 12:52:45.861922 kubelet[3479]: I0115 12:52:45.861906 3479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 12:52:45.862025 kubelet[3479]: I0115 12:52:45.862017 3479 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 12:52:45.862093 kubelet[3479]: I0115 12:52:45.862085 3479 kubelet.go:2329] "Starting kubelet main sync loop" Jan 15 12:52:45.862391 kubelet[3479]: E0115 12:52:45.862379 3479 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 12:52:45.872736 kubelet[3479]: I0115 12:52:45.871644 3479 factory.go:221] Registration of the systemd container factory successfully Jan 15 12:52:45.872736 kubelet[3479]: I0115 12:52:45.871739 3479 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 12:52:45.876841 kubelet[3479]: E0115 12:52:45.876772 3479 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 12:52:45.878231 kubelet[3479]: I0115 12:52:45.878214 3479 factory.go:221] Registration of the containerd container factory successfully Jan 15 12:52:45.936599 kubelet[3479]: I0115 12:52:45.936564 3479 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 12:52:45.936599 kubelet[3479]: I0115 12:52:45.936591 3479 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 12:52:45.936599 kubelet[3479]: I0115 12:52:45.936613 3479 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:52:45.936785 kubelet[3479]: I0115 12:52:45.936767 3479 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 12:52:45.936816 kubelet[3479]: I0115 12:52:45.936792 3479 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 12:52:45.936816 kubelet[3479]: I0115 12:52:45.936800 3479 policy_none.go:49] "None policy: Start" Jan 15 12:52:45.937550 kubelet[3479]: I0115 12:52:45.937528 3479 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 12:52:45.937550 kubelet[3479]: I0115 12:52:45.937556 3479 state_mem.go:35] "Initializing new in-memory state store" Jan 15 12:52:45.937734 kubelet[3479]: I0115 12:52:45.937716 3479 state_mem.go:75] "Updated machine memory state" Jan 15 12:52:45.938877 kubelet[3479]: I0115 12:52:45.938854 3479 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 12:52:45.940060 kubelet[3479]: I0115 12:52:45.939161 3479 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 12:52:45.960755 kubelet[3479]: I0115 12:52:45.960729 3479 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:45.962630 kubelet[3479]: I0115 12:52:45.962602 3479 topology_manager.go:215] "Topology Admit Handler" podUID="ddde06b96581506f58b275450ffcab53" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:45.962749 kubelet[3479]: I0115 12:52:45.962708 3479 topology_manager.go:215] "Topology Admit Handler" podUID="f1a2989405cdbe2e363922e312b0b9cb" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:45.963283 kubelet[3479]: I0115 12:52:45.962779 3479 topology_manager.go:215] "Topology Admit Handler" podUID="97634fc2d123bae4a0e337b22e4a1263" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:45.981595 kubelet[3479]: W0115 12:52:45.980734 3479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:52:45.986074 kubelet[3479]: W0115 12:52:45.986033 3479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:52:45.986894 kubelet[3479]: E0115 12:52:45.986115 3479 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-b64d8040ed\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:45.986894 kubelet[3479]: W0115 12:52:45.986403 3479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:52:45.986894 kubelet[3479]: I0115 12:52:45.986632 3479 kubelet_node_status.go:112] "Node was previously registered" 
node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:45.986894 kubelet[3479]: I0115 12:52:45.986708 3479 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.159817 kubelet[3479]: I0115 12:52:46.159677 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddde06b96581506f58b275450ffcab53-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b64d8040ed\" (UID: \"ddde06b96581506f58b275450ffcab53\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.159817 kubelet[3479]: I0115 12:52:46.159728 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddde06b96581506f58b275450ffcab53-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-b64d8040ed\" (UID: \"ddde06b96581506f58b275450ffcab53\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.159817 kubelet[3479]: I0115 12:52:46.159752 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.159817 kubelet[3479]: I0115 12:52:46.159774 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.159817 kubelet[3479]: I0115 12:52:46.159800 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.160567 kubelet[3479]: I0115 12:52:46.159826 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97634fc2d123bae4a0e337b22e4a1263-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-b64d8040ed\" (UID: \"97634fc2d123bae4a0e337b22e4a1263\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.160567 kubelet[3479]: I0115 12:52:46.159844 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddde06b96581506f58b275450ffcab53-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b64d8040ed\" (UID: \"ddde06b96581506f58b275450ffcab53\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.160567 kubelet[3479]: I0115 12:52:46.159866 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") 
" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.160567 kubelet[3479]: I0115 12:52:46.159885 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1a2989405cdbe2e363922e312b0b9cb-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b64d8040ed\" (UID: \"f1a2989405cdbe2e363922e312b0b9cb\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.837089 kubelet[3479]: I0115 12:52:46.836212 3479 apiserver.go:52] "Watching apiserver" Jan 15 12:52:46.859363 kubelet[3479]: I0115 12:52:46.859296 3479 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 15 12:52:46.935184 kubelet[3479]: W0115 12:52:46.935135 3479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:52:46.935307 kubelet[3479]: E0115 12:52:46.935215 3479 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-b64d8040ed\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b64d8040ed" Jan 15 12:52:46.965396 kubelet[3479]: I0115 12:52:46.965353 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-b64d8040ed" podStartSLOduration=1.965309844 podStartE2EDuration="1.965309844s" podCreationTimestamp="2025-01-15 12:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:52:46.948447638 +0000 UTC m=+1.194926383" watchObservedRunningTime="2025-01-15 12:52:46.965309844 +0000 UTC m=+1.211788589" Jan 15 12:52:46.982726 kubelet[3479]: I0115 12:52:46.982673 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b64d8040ed" podStartSLOduration=1.982634089 podStartE2EDuration="1.982634089s" podCreationTimestamp="2025-01-15 12:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:52:46.965561163 +0000 UTC m=+1.212039908" watchObservedRunningTime="2025-01-15 12:52:46.982634089 +0000 UTC m=+1.229112834" Jan 15 12:52:46.998026 kubelet[3479]: I0115 12:52:46.997443 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b64d8040ed" podStartSLOduration=1.99694178 podStartE2EDuration="1.99694178s" podCreationTimestamp="2025-01-15 12:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:52:46.983223487 +0000 UTC m=+1.229702232" watchObservedRunningTime="2025-01-15 12:52:46.99694178 +0000 UTC m=+1.243420565" Jan 15 12:52:50.599878 sudo[2497]: pam_unix(sudo:session): session closed for user root Jan 15 12:52:50.684141 sshd[2493]: pam_unix(sshd:session): session closed for user core Jan 15 12:52:50.688584 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:39256.service: Deactivated successfully. Jan 15 12:52:50.690270 systemd-logind[1784]: Session 9 logged out. Waiting for processes to exit. Jan 15 12:52:50.691762 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 12:52:50.692764 systemd-logind[1784]: Removed session 9. 
Jan 15 12:52:59.515527 kubelet[3479]: I0115 12:52:59.515487 3479 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 12:52:59.515970 containerd[1810]: time="2025-01-15T12:52:59.515823414Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 12:52:59.516246 kubelet[3479]: I0115 12:52:59.516003 3479 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 12:53:00.212421 kubelet[3479]: I0115 12:53:00.212373 3479 topology_manager.go:215] "Topology Admit Handler" podUID="db7214c6-5577-4a2a-9e3a-c9af42f8ddc8" podNamespace="kube-system" podName="kube-proxy-qxdwg" Jan 15 12:53:00.247831 kubelet[3479]: I0115 12:53:00.247583 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db7214c6-5577-4a2a-9e3a-c9af42f8ddc8-xtables-lock\") pod \"kube-proxy-qxdwg\" (UID: \"db7214c6-5577-4a2a-9e3a-c9af42f8ddc8\") " pod="kube-system/kube-proxy-qxdwg" Jan 15 12:53:00.247831 kubelet[3479]: I0115 12:53:00.247657 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db7214c6-5577-4a2a-9e3a-c9af42f8ddc8-lib-modules\") pod \"kube-proxy-qxdwg\" (UID: \"db7214c6-5577-4a2a-9e3a-c9af42f8ddc8\") " pod="kube-system/kube-proxy-qxdwg" Jan 15 12:53:00.247831 kubelet[3479]: I0115 12:53:00.247724 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db7214c6-5577-4a2a-9e3a-c9af42f8ddc8-kube-proxy\") pod \"kube-proxy-qxdwg\" (UID: \"db7214c6-5577-4a2a-9e3a-c9af42f8ddc8\") " pod="kube-system/kube-proxy-qxdwg" Jan 15 12:53:00.247831 kubelet[3479]: I0115 12:53:00.247749 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmrsh\" (UniqueName: \"kubernetes.io/projected/db7214c6-5577-4a2a-9e3a-c9af42f8ddc8-kube-api-access-hmrsh\") pod \"kube-proxy-qxdwg\" (UID: \"db7214c6-5577-4a2a-9e3a-c9af42f8ddc8\") " pod="kube-system/kube-proxy-qxdwg" Jan 15 12:53:00.519443 containerd[1810]: time="2025-01-15T12:53:00.519259692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxdwg,Uid:db7214c6-5577-4a2a-9e3a-c9af42f8ddc8,Namespace:kube-system,Attempt:0,}" Jan 15 12:53:00.577107 containerd[1810]: time="2025-01-15T12:53:00.575792097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:00.577107 containerd[1810]: time="2025-01-15T12:53:00.575846577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:00.577107 containerd[1810]: time="2025-01-15T12:53:00.575900537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:00.577107 containerd[1810]: time="2025-01-15T12:53:00.576358176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:00.650254 kubelet[3479]: I0115 12:53:00.650016 3479 topology_manager.go:215] "Topology Admit Handler" podUID="940a6cdb-c840-4657-a97d-d2259f0c31ff" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-qc9mf" Jan 15 12:53:00.652386 containerd[1810]: time="2025-01-15T12:53:00.652234061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxdwg,Uid:db7214c6-5577-4a2a-9e3a-c9af42f8ddc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4cfe51c925c3f1158c2a1baf4b9f774abeec664ec5477a270a80c60464764f7\"" Jan 15 12:53:00.669336 containerd[1810]: time="2025-01-15T12:53:00.667085271Z" level=info msg="CreateContainer within sandbox \"f4cfe51c925c3f1158c2a1baf4b9f774abeec664ec5477a270a80c60464764f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 12:53:00.714644 containerd[1810]: time="2025-01-15T12:53:00.714587815Z" level=info msg="CreateContainer within sandbox \"f4cfe51c925c3f1158c2a1baf4b9f774abeec664ec5477a270a80c60464764f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e38a1273d1cd27decb5dcc9d17aa2ceabc2a1b9f20fd8df700b2f1aa9e8a7888\"" Jan 15 12:53:00.715413 containerd[1810]: time="2025-01-15T12:53:00.715376333Z" level=info msg="StartContainer for \"e38a1273d1cd27decb5dcc9d17aa2ceabc2a1b9f20fd8df700b2f1aa9e8a7888\"" Jan 15 12:53:00.754393 kubelet[3479]: I0115 12:53:00.754338 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvhlw\" (UniqueName: \"kubernetes.io/projected/940a6cdb-c840-4657-a97d-d2259f0c31ff-kube-api-access-zvhlw\") pod \"tigera-operator-c7ccbd65-qc9mf\" (UID: \"940a6cdb-c840-4657-a97d-d2259f0c31ff\") " pod="tigera-operator/tigera-operator-c7ccbd65-qc9mf" Jan 15 12:53:00.754990 kubelet[3479]: I0115 12:53:00.754676 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/940a6cdb-c840-4657-a97d-d2259f0c31ff-var-lib-calico\") pod \"tigera-operator-c7ccbd65-qc9mf\" (UID: \"940a6cdb-c840-4657-a97d-d2259f0c31ff\") " pod="tigera-operator/tigera-operator-c7ccbd65-qc9mf" Jan 15 12:53:00.769311 containerd[1810]: time="2025-01-15T12:53:00.769159384Z" level=info msg="StartContainer for \"e38a1273d1cd27decb5dcc9d17aa2ceabc2a1b9f20fd8df700b2f1aa9e8a7888\" returns successfully" Jan 15 12:53:00.952385 kubelet[3479]: I0115 12:53:00.952303 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qxdwg" podStartSLOduration=0.952252862 podStartE2EDuration="952.252862ms" podCreationTimestamp="2025-01-15 12:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:53:00.950053868 +0000 UTC m=+15.196532613" watchObservedRunningTime="2025-01-15 12:53:00.952252862 +0000 UTC m=+15.198731607" Jan 15 12:53:00.961482 containerd[1810]: time="2025-01-15T12:53:00.960448123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-qc9mf,Uid:940a6cdb-c840-4657-a97d-d2259f0c31ff,Namespace:tigera-operator,Attempt:0,}" Jan 15 12:53:01.378261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819344631.mount: Deactivated successfully. Jan 15 12:53:01.981854 containerd[1810]: time="2025-01-15T12:53:01.981545824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:01.981854 containerd[1810]: time="2025-01-15T12:53:01.981607064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:01.981854 containerd[1810]: time="2025-01-15T12:53:01.981622304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:01.981854 containerd[1810]: time="2025-01-15T12:53:01.981707744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:02.032566 containerd[1810]: time="2025-01-15T12:53:02.032515666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-qc9mf,Uid:940a6cdb-c840-4657-a97d-d2259f0c31ff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6b59a13fb731acad94cb296e11aa9f5167497e2094f91ae79607f100c9962af3\"" Jan 15 12:53:02.035866 containerd[1810]: time="2025-01-15T12:53:02.035793698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 15 12:53:12.212066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607876569.mount: Deactivated successfully. Jan 15 12:53:18.375044 containerd[1810]: time="2025-01-15T12:53:18.374286284Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:18.377573 containerd[1810]: time="2025-01-15T12:53:18.377511037Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19126012" Jan 15 12:53:18.381415 containerd[1810]: time="2025-01-15T12:53:18.381360789Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:18.386708 containerd[1810]: time="2025-01-15T12:53:18.386623058Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:18.387535 containerd[1810]: time="2025-01-15T12:53:18.387291496Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 16.351457318s" Jan 15 12:53:18.387535 containerd[1810]: time="2025-01-15T12:53:18.387326496Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 15 12:53:18.389794 containerd[1810]: time="2025-01-15T12:53:18.389560811Z" level=info msg="CreateContainer within sandbox \"6b59a13fb731acad94cb296e11aa9f5167497e2094f91ae79607f100c9962af3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 12:53:18.429814 containerd[1810]: time="2025-01-15T12:53:18.429735284Z" level=info msg="CreateContainer within sandbox \"6b59a13fb731acad94cb296e11aa9f5167497e2094f91ae79607f100c9962af3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9f27e73c34967ee01d4d86f6527cd1bd5064a885a81fc3d403215e7496211492\"" Jan 15 12:53:18.430495 containerd[1810]: 
time="2025-01-15T12:53:18.430347882Z" level=info msg="StartContainer for \"9f27e73c34967ee01d4d86f6527cd1bd5064a885a81fc3d403215e7496211492\"" Jan 15 12:53:18.484874 containerd[1810]: time="2025-01-15T12:53:18.484718004Z" level=info msg="StartContainer for \"9f27e73c34967ee01d4d86f6527cd1bd5064a885a81fc3d403215e7496211492\" returns successfully" Jan 15 12:53:18.981582 kubelet[3479]: I0115 12:53:18.981530 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-qc9mf" podStartSLOduration=2.62778613 podStartE2EDuration="18.981485962s" podCreationTimestamp="2025-01-15 12:53:00 +0000 UTC" firstStartedPulling="2025-01-15 12:53:02.033922263 +0000 UTC m=+16.280401008" lastFinishedPulling="2025-01-15 12:53:18.387622095 +0000 UTC m=+32.634100840" observedRunningTime="2025-01-15 12:53:18.981285242 +0000 UTC m=+33.227763987" watchObservedRunningTime="2025-01-15 12:53:18.981485962 +0000 UTC m=+33.227964707" Jan 15 12:53:22.646105 kubelet[3479]: I0115 12:53:22.646030 3479 topology_manager.go:215] "Topology Admit Handler" podUID="2323c948-e498-4c44-993b-4186a0d639d2" podNamespace="calico-system" podName="calico-typha-577dcdff-t4b9l" Jan 15 12:53:22.690643 kubelet[3479]: I0115 12:53:22.690519 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2323c948-e498-4c44-993b-4186a0d639d2-typha-certs\") pod \"calico-typha-577dcdff-t4b9l\" (UID: \"2323c948-e498-4c44-993b-4186a0d639d2\") " pod="calico-system/calico-typha-577dcdff-t4b9l" Jan 15 12:53:22.690643 kubelet[3479]: I0115 12:53:22.690557 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2323c948-e498-4c44-993b-4186a0d639d2-tigera-ca-bundle\") pod \"calico-typha-577dcdff-t4b9l\" (UID: \"2323c948-e498-4c44-993b-4186a0d639d2\") " pod="calico-system/calico-typha-577dcdff-t4b9l" Jan 15 12:53:22.690643 kubelet[3479]: I0115 12:53:22.690582 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7rcr\" (UniqueName: \"kubernetes.io/projected/2323c948-e498-4c44-993b-4186a0d639d2-kube-api-access-q7rcr\") pod \"calico-typha-577dcdff-t4b9l\" (UID: \"2323c948-e498-4c44-993b-4186a0d639d2\") " pod="calico-system/calico-typha-577dcdff-t4b9l" Jan 15 12:53:22.751177 kubelet[3479]: I0115 12:53:22.751120 3479 topology_manager.go:215] "Topology Admit Handler" podUID="06df09da-6694-46f0-b5c5-27e65ea9bda0" podNamespace="calico-system" podName="calico-node-ck4tx" Jan 15 12:53:22.792623 kubelet[3479]: I0115 12:53:22.792502 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qztwj\" (UniqueName: \"kubernetes.io/projected/06df09da-6694-46f0-b5c5-27e65ea9bda0-kube-api-access-qztwj\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.792623 kubelet[3479]: I0115 12:53:22.792543 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06df09da-6694-46f0-b5c5-27e65ea9bda0-xtables-lock\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.792623 kubelet[3479]: I0115 12:53:22.792565 3479 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/06df09da-6694-46f0-b5c5-27e65ea9bda0-policysync\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.795229 kubelet[3479]: I0115 12:53:22.792806 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/06df09da-6694-46f0-b5c5-27e65ea9bda0-var-run-calico\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.795229 kubelet[3479]: I0115 12:53:22.792840 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/06df09da-6694-46f0-b5c5-27e65ea9bda0-cni-bin-dir\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.795229 kubelet[3479]: I0115 12:53:22.792862 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/06df09da-6694-46f0-b5c5-27e65ea9bda0-cni-net-dir\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.795229 kubelet[3479]: I0115 12:53:22.792881 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06df09da-6694-46f0-b5c5-27e65ea9bda0-tigera-ca-bundle\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.795229 kubelet[3479]: I0115 12:53:22.792898 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/06df09da-6694-46f0-b5c5-27e65ea9bda0-cni-log-dir\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.795414 kubelet[3479]: I0115 12:53:22.792922 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/06df09da-6694-46f0-b5c5-27e65ea9bda0-flexvol-driver-host\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.795414 kubelet[3479]: I0115 12:53:22.792964 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06df09da-6694-46f0-b5c5-27e65ea9bda0-var-lib-calico\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.795414 kubelet[3479]: I0115 12:53:22.792994 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06df09da-6694-46f0-b5c5-27e65ea9bda0-lib-modules\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.795414 kubelet[3479]: I0115 12:53:22.793026 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" 
(UniqueName: \"kubernetes.io/secret/06df09da-6694-46f0-b5c5-27e65ea9bda0-node-certs\") pod \"calico-node-ck4tx\" (UID: \"06df09da-6694-46f0-b5c5-27e65ea9bda0\") " pod="calico-system/calico-node-ck4tx" Jan 15 12:53:22.868584 kubelet[3479]: I0115 12:53:22.868532 3479 topology_manager.go:215] "Topology Admit Handler" podUID="2e31c47b-ec08-47f1-903e-14f9c9ca8a9b" podNamespace="calico-system" podName="csi-node-driver-z9tjc" Jan 15 12:53:22.869095 kubelet[3479]: E0115 12:53:22.868835 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9tjc" podUID="2e31c47b-ec08-47f1-903e-14f9c9ca8a9b" Jan 15 12:53:22.895741 kubelet[3479]: I0115 12:53:22.894591 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e31c47b-ec08-47f1-903e-14f9c9ca8a9b-socket-dir\") pod \"csi-node-driver-z9tjc\" (UID: \"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b\") " pod="calico-system/csi-node-driver-z9tjc" Jan 15 12:53:22.895741 kubelet[3479]: I0115 12:53:22.894682 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e31c47b-ec08-47f1-903e-14f9c9ca8a9b-registration-dir\") pod \"csi-node-driver-z9tjc\" (UID: \"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b\") " pod="calico-system/csi-node-driver-z9tjc" Jan 15 12:53:22.895741 kubelet[3479]: I0115 12:53:22.894717 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2e31c47b-ec08-47f1-903e-14f9c9ca8a9b-varrun\") pod \"csi-node-driver-z9tjc\" (UID: \"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b\") " pod="calico-system/csi-node-driver-z9tjc" Jan 15 12:53:22.895741 kubelet[3479]: I0115 12:53:22.894788 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xdlb\" (UniqueName: \"kubernetes.io/projected/2e31c47b-ec08-47f1-903e-14f9c9ca8a9b-kube-api-access-4xdlb\") pod \"csi-node-driver-z9tjc\" (UID: \"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b\") " pod="calico-system/csi-node-driver-z9tjc" Jan 15 12:53:22.895741 kubelet[3479]: I0115 12:53:22.894823 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e31c47b-ec08-47f1-903e-14f9c9ca8a9b-kubelet-dir\") pod \"csi-node-driver-z9tjc\" (UID: \"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b\") " pod="calico-system/csi-node-driver-z9tjc" Jan 15 12:53:22.899409 kubelet[3479]: E0115 12:53:22.898710 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.900046 kubelet[3479]: W0115 12:53:22.899980 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.901141 kubelet[3479]: E0115 12:53:22.901103 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:53:22.901527 kubelet[3479]: E0115 12:53:22.901486 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.901527 kubelet[3479]: W0115 12:53:22.901498 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.901713 kubelet[3479]: E0115 12:53:22.901513 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.902076 kubelet[3479]: E0115 12:53:22.902061 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.902274 kubelet[3479]: W0115 12:53:22.902112 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.902274 kubelet[3479]: E0115 12:53:22.902129 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.903670 kubelet[3479]: E0115 12:53:22.903603 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.903670 kubelet[3479]: W0115 12:53:22.903619 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.903788 kubelet[3479]: E0115 12:53:22.903716 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.903987 kubelet[3479]: E0115 12:53:22.903967 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.903987 kubelet[3479]: W0115 12:53:22.903985 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.904120 kubelet[3479]: E0115 12:53:22.904078 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.904413 kubelet[3479]: E0115 12:53:22.904393 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.904532 kubelet[3479]: W0115 12:53:22.904511 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.905782 kubelet[3479]: E0115 12:53:22.904815 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:53:22.905782 kubelet[3479]: E0115 12:53:22.905071 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.905782 kubelet[3479]: W0115 12:53:22.905083 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.905782 kubelet[3479]: E0115 12:53:22.905143 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.905782 kubelet[3479]: E0115 12:53:22.905760 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.905782 kubelet[3479]: W0115 12:53:22.905774 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.906071 kubelet[3479]: E0115 12:53:22.906045 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.906261 kubelet[3479]: E0115 12:53:22.906240 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.906324 kubelet[3479]: W0115 12:53:22.906268 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.907388 kubelet[3479]: E0115 12:53:22.906996 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.907388 kubelet[3479]: E0115 12:53:22.907007 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.907388 kubelet[3479]: W0115 12:53:22.907019 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.907388 kubelet[3479]: E0115 12:53:22.907084 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.908326 kubelet[3479]: E0115 12:53:22.908038 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.908326 kubelet[3479]: W0115 12:53:22.908060 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.908326 kubelet[3479]: E0115 12:53:22.908250 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:53:22.909235 kubelet[3479]: E0115 12:53:22.908640 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.909235 kubelet[3479]: W0115 12:53:22.908662 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.909235 kubelet[3479]: E0115 12:53:22.908701 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.909365 kubelet[3479]: E0115 12:53:22.909306 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.909365 kubelet[3479]: W0115 12:53:22.909319 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.909365 kubelet[3479]: E0115 12:53:22.909335 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.911862 kubelet[3479]: E0115 12:53:22.910639 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.911862 kubelet[3479]: W0115 12:53:22.910658 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.911862 kubelet[3479]: E0115 12:53:22.910674 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.919231 kubelet[3479]: E0115 12:53:22.918479 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.919231 kubelet[3479]: W0115 12:53:22.918500 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.919231 kubelet[3479]: E0115 12:53:22.918522 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.929809 kubelet[3479]: E0115 12:53:22.926968 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.929809 kubelet[3479]: W0115 12:53:22.927003 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.929809 kubelet[3479]: E0115 12:53:22.927024 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:53:22.953131 containerd[1810]: time="2025-01-15T12:53:22.953060793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-577dcdff-t4b9l,Uid:2323c948-e498-4c44-993b-4186a0d639d2,Namespace:calico-system,Attempt:0,}" Jan 15 12:53:22.996865 kubelet[3479]: E0115 12:53:22.996485 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.996865 kubelet[3479]: W0115 12:53:22.996507 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.996865 kubelet[3479]: E0115 12:53:22.996532 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.996865 kubelet[3479]: E0115 12:53:22.996756 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.996865 kubelet[3479]: W0115 12:53:22.996765 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.996865 kubelet[3479]: E0115 12:53:22.996783 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.997341 kubelet[3479]: E0115 12:53:22.997004 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.997341 kubelet[3479]: W0115 12:53:22.997018 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.997341 kubelet[3479]: E0115 12:53:22.997040 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.997502 kubelet[3479]: E0115 12:53:22.997481 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.997541 kubelet[3479]: W0115 12:53:22.997503 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.997541 kubelet[3479]: E0115 12:53:22.997521 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:53:22.998303 kubelet[3479]: E0115 12:53:22.998288 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.998486 kubelet[3479]: W0115 12:53:22.998382 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.998486 kubelet[3479]: E0115 12:53:22.998413 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.998814 kubelet[3479]: E0115 12:53:22.998755 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.998814 kubelet[3479]: W0115 12:53:22.998767 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.998902 kubelet[3479]: E0115 12:53:22.998812 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.999225 kubelet[3479]: E0115 12:53:22.999086 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.999225 kubelet[3479]: W0115 12:53:22.999099 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.999536 kubelet[3479]: E0115 12:53:22.999438 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.999536 kubelet[3479]: W0115 12:53:22.999450 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.999536 kubelet[3479]: E0115 12:53:22.999466 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:22.999865 kubelet[3479]: E0115 12:53:22.999738 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:22.999865 kubelet[3479]: W0115 12:53:22.999749 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:22.999865 kubelet[3479]: E0115 12:53:22.999762 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:53:23.000285 kubelet[3479]: E0115 12:53:23.000085 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.000285 kubelet[3479]: W0115 12:53:23.000098 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.000285 kubelet[3479]: E0115 12:53:23.000149 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.001141 kubelet[3479]: E0115 12:53:23.000992 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.001141 kubelet[3479]: W0115 12:53:23.001010 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.001141 kubelet[3479]: E0115 12:53:23.001064 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.001141 kubelet[3479]: E0115 12:53:23.001097 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.001843 kubelet[3479]: E0115 12:53:23.001720 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.001843 kubelet[3479]: W0115 12:53:23.001734 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.001843 kubelet[3479]: E0115 12:53:23.001772 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.002174 kubelet[3479]: E0115 12:53:23.002145 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.002650 kubelet[3479]: W0115 12:53:23.002209 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.002650 kubelet[3479]: E0115 12:53:23.002226 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:53:23.003275 kubelet[3479]: E0115 12:53:23.003039 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.003275 kubelet[3479]: W0115 12:53:23.003054 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.003275 kubelet[3479]: E0115 12:53:23.003077 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.004609 kubelet[3479]: E0115 12:53:23.004098 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.004609 kubelet[3479]: W0115 12:53:23.004133 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.004609 kubelet[3479]: E0115 12:53:23.004320 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.006575 kubelet[3479]: E0115 12:53:23.006471 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.006575 kubelet[3479]: W0115 12:53:23.006485 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.007000 kubelet[3479]: E0115 12:53:23.006964 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.007203 kubelet[3479]: E0115 12:53:23.007143 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.007886 kubelet[3479]: W0115 12:53:23.007812 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.007886 kubelet[3479]: E0115 12:53:23.007861 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.008676 kubelet[3479]: E0115 12:53:23.008145 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.009881 kubelet[3479]: W0115 12:53:23.008801 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.009881 kubelet[3479]: E0115 12:53:23.008877 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:53:23.012367 kubelet[3479]: E0115 12:53:23.011486 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.012367 kubelet[3479]: W0115 12:53:23.011502 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.012367 kubelet[3479]: E0115 12:53:23.011667 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.012367 kubelet[3479]: W0115 12:53:23.011675 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.012367 kubelet[3479]: E0115 12:53:23.011905 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.012763 kubelet[3479]: E0115 12:53:23.012627 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.012763 kubelet[3479]: E0115 12:53:23.012679 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.012763 kubelet[3479]: W0115 12:53:23.012686 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.012763 kubelet[3479]: E0115 12:53:23.012697 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.013365 kubelet[3479]: E0115 12:53:23.013350 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.013577 kubelet[3479]: W0115 12:53:23.013441 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.013577 kubelet[3479]: E0115 12:53:23.013462 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.014126 kubelet[3479]: E0115 12:53:23.014111 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.014360 kubelet[3479]: W0115 12:53:23.014247 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.014360 kubelet[3479]: E0115 12:53:23.014271 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:53:23.014735 kubelet[3479]: E0115 12:53:23.014545 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.014826 kubelet[3479]: W0115 12:53:23.014813 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.015345 kubelet[3479]: E0115 12:53:23.015025 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.016309 kubelet[3479]: E0115 12:53:23.016270 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.016411 kubelet[3479]: W0115 12:53:23.016397 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.016467 kubelet[3479]: E0115 12:53:23.016459 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.019148 kubelet[3479]: E0115 12:53:23.019126 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:23.019148 kubelet[3479]: W0115 12:53:23.019142 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:23.019284 kubelet[3479]: E0115 12:53:23.019167 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:23.022090 containerd[1810]: time="2025-01-15T12:53:23.021981923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:23.022090 containerd[1810]: time="2025-01-15T12:53:23.022040442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:23.022090 containerd[1810]: time="2025-01-15T12:53:23.022059442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:23.022409 containerd[1810]: time="2025-01-15T12:53:23.022151122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:23.057827 containerd[1810]: time="2025-01-15T12:53:23.057747925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ck4tx,Uid:06df09da-6694-46f0-b5c5-27e65ea9bda0,Namespace:calico-system,Attempt:0,}" Jan 15 12:53:23.064292 containerd[1810]: time="2025-01-15T12:53:23.064247231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-577dcdff-t4b9l,Uid:2323c948-e498-4c44-993b-4186a0d639d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"2011ff7b5b53debee3d984604e2fe93fdff67acd759d1e8d328d0717752b4d44\"" Jan 15 12:53:23.066067 containerd[1810]: time="2025-01-15T12:53:23.065920667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 15 12:53:23.102072 containerd[1810]: time="2025-01-15T12:53:23.101356910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:23.102072 containerd[1810]: time="2025-01-15T12:53:23.102037788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:23.102802 containerd[1810]: time="2025-01-15T12:53:23.102052788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:23.102802 containerd[1810]: time="2025-01-15T12:53:23.102161868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:23.144616 containerd[1810]: time="2025-01-15T12:53:23.144566056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ck4tx,Uid:06df09da-6694-46f0-b5c5-27e65ea9bda0,Namespace:calico-system,Attempt:0,} returns sandbox id \"518f68b97b539a8c7bde273b6cbfca5f5a5424c3bb5ddeb7ce3ae7fc86653a9a\"" Jan 15 12:53:24.275626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2189479569.mount: Deactivated successfully. 
Jan 15 12:53:24.863093 kubelet[3479]: E0115 12:53:24.862930 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9tjc" podUID="2e31c47b-ec08-47f1-903e-14f9c9ca8a9b" Jan 15 12:53:24.968170 containerd[1810]: time="2025-01-15T12:53:24.967474606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:24.971214 containerd[1810]: time="2025-01-15T12:53:24.971166438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 15 12:53:24.977263 containerd[1810]: time="2025-01-15T12:53:24.977178584Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:24.984987 containerd[1810]: time="2025-01-15T12:53:24.984935288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:24.986062 containerd[1810]: time="2025-01-15T12:53:24.985906485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.919951819s" Jan 15 12:53:24.986062 containerd[1810]: time="2025-01-15T12:53:24.985944485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 15 12:53:24.991124 containerd[1810]: time="2025-01-15T12:53:24.988122761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 15 12:53:24.997126 containerd[1810]: time="2025-01-15T12:53:24.996994701Z" level=info msg="CreateContainer within sandbox \"2011ff7b5b53debee3d984604e2fe93fdff67acd759d1e8d328d0717752b4d44\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 12:53:25.048690 containerd[1810]: time="2025-01-15T12:53:25.048630909Z" level=info msg="CreateContainer within sandbox \"2011ff7b5b53debee3d984604e2fe93fdff67acd759d1e8d328d0717752b4d44\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d71a18036525ffae531777015bfc8938524e3306bf65bff8d42f27e83999da8a\"" Jan 15 12:53:25.050225 containerd[1810]: time="2025-01-15T12:53:25.049851946Z" level=info msg="StartContainer for \"d71a18036525ffae531777015bfc8938524e3306bf65bff8d42f27e83999da8a\"" Jan 15 12:53:25.105660 containerd[1810]: time="2025-01-15T12:53:25.105608585Z" level=info msg="StartContainer for \"d71a18036525ffae531777015bfc8938524e3306bf65bff8d42f27e83999da8a\" returns successfully" Jan 15 12:53:26.007499 kubelet[3479]: E0115 12:53:26.007260 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:26.007499 kubelet[3479]: W0115 12:53:26.007484 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 15 12:53:26.010019 kubelet[3479]: E0115 12:53:26.007512 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same three FlexVolume probe messages as at 12:53:22 repeat essentially verbatim from 12:53:26.007 through 12:53:26.041; the repetitions are elided here.]
Jan 15 12:53:26.011944 kubelet[3479]: I0115 12:53:26.011928 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-577dcdff-t4b9l" podStartSLOduration=2.090803981 podStartE2EDuration="4.011504718s" podCreationTimestamp="2025-01-15 12:53:22 +0000 UTC" firstStartedPulling="2025-01-15 12:53:23.065557468 +0000 UTC m=+37.312036213" lastFinishedPulling="2025-01-15 12:53:24.986258245 +0000 UTC m=+39.232736950" observedRunningTime="2025-01-15 12:53:26.00610485 +0000 UTC m=+40.252583595" watchObservedRunningTime="2025-01-15 12:53:26.011504718 +0000 UTC m=+40.257983463"
Error: unexpected end of JSON input" Jan 15 12:53:26.041911 kubelet[3479]: E0115 12:53:26.041858 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:53:26.041911 kubelet[3479]: W0115 12:53:26.041870 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:53:26.041911 kubelet[3479]: E0115 12:53:26.041885 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:53:26.261384 containerd[1810]: time="2025-01-15T12:53:26.260606730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:26.263545 containerd[1810]: time="2025-01-15T12:53:26.263509284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 15 12:53:26.266607 containerd[1810]: time="2025-01-15T12:53:26.266544157Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:26.272058 containerd[1810]: time="2025-01-15T12:53:26.272026065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:26.272998 containerd[1810]: time="2025-01-15T12:53:26.272874543Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.276148961s" Jan 15 12:53:26.272998 containerd[1810]: time="2025-01-15T12:53:26.272908303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 15 12:53:26.275047 containerd[1810]: time="2025-01-15T12:53:26.275014978Z" level=info msg="CreateContainer within sandbox \"518f68b97b539a8c7bde273b6cbfca5f5a5424c3bb5ddeb7ce3ae7fc86653a9a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 12:53:26.329490 containerd[1810]: time="2025-01-15T12:53:26.329403419Z" level=info msg="CreateContainer within sandbox \"518f68b97b539a8c7bde273b6cbfca5f5a5424c3bb5ddeb7ce3ae7fc86653a9a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"067c30a166b2645492b5947212cf92370e02ccb3030c3fe69f73593922cacd59\"" Jan 15 12:53:26.331174 containerd[1810]: time="2025-01-15T12:53:26.331089655Z" level=info msg="StartContainer for \"067c30a166b2645492b5947212cf92370e02ccb3030c3fe69f73593922cacd59\"" Jan 15 12:53:26.384468 containerd[1810]: time="2025-01-15T12:53:26.383319660Z" level=info msg="StartContainer for \"067c30a166b2645492b5947212cf92370e02ccb3030c3fe69f73593922cacd59\" returns successfully" Jan 15 12:53:26.414105 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-067c30a166b2645492b5947212cf92370e02ccb3030c3fe69f73593922cacd59-rootfs.mount: Deactivated successfully. Jan 15 12:53:26.862581 kubelet[3479]: E0115 12:53:26.862469 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9tjc" podUID="2e31c47b-ec08-47f1-903e-14f9c9ca8a9b" Jan 15 12:53:27.267783 containerd[1810]: time="2025-01-15T12:53:27.267722713Z" level=info msg="shim disconnected" id=067c30a166b2645492b5947212cf92370e02ccb3030c3fe69f73593922cacd59 namespace=k8s.io Jan 15 12:53:27.267783 containerd[1810]: time="2025-01-15T12:53:27.267776873Z" level=warning msg="cleaning up after shim disconnected" id=067c30a166b2645492b5947212cf92370e02ccb3030c3fe69f73593922cacd59 namespace=k8s.io Jan 15 12:53:27.267783 containerd[1810]: time="2025-01-15T12:53:27.267785433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 12:53:27.995906 containerd[1810]: time="2025-01-15T12:53:27.995544871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 15 12:53:28.862628 kubelet[3479]: E0115 12:53:28.862576 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9tjc" podUID="2e31c47b-ec08-47f1-903e-14f9c9ca8a9b" Jan 15 12:53:30.863784 kubelet[3479]: E0115 12:53:30.863208 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9tjc" podUID="2e31c47b-ec08-47f1-903e-14f9c9ca8a9b" Jan 15 12:53:31.181264 containerd[1810]: time="2025-01-15T12:53:31.181054058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:31.183263 containerd[1810]: time="2025-01-15T12:53:31.183223933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 15 12:53:31.185482 containerd[1810]: time="2025-01-15T12:53:31.185434968Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:31.189233 containerd[1810]: time="2025-01-15T12:53:31.189182520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:31.190600 containerd[1810]: time="2025-01-15T12:53:31.190109038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.194464648s" Jan 15 12:53:31.190600 containerd[1810]: time="2025-01-15T12:53:31.190141998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 15 12:53:31.191970 containerd[1810]: time="2025-01-15T12:53:31.191924714Z" level=info msg="CreateContainer within sandbox \"518f68b97b539a8c7bde273b6cbfca5f5a5424c3bb5ddeb7ce3ae7fc86653a9a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 12:53:31.224771 containerd[1810]: time="2025-01-15T12:53:31.224702962Z" level=info msg="CreateContainer within sandbox \"518f68b97b539a8c7bde273b6cbfca5f5a5424c3bb5ddeb7ce3ae7fc86653a9a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3ed2a57464d234ba025b04565af2d13c263e52a2ef5518d0c2779fa802fc2ac9\"" Jan 15 12:53:31.225486 containerd[1810]: time="2025-01-15T12:53:31.225323081Z" level=info msg="StartContainer for \"3ed2a57464d234ba025b04565af2d13c263e52a2ef5518d0c2779fa802fc2ac9\"" Jan 15 12:53:31.283718 containerd[1810]: time="2025-01-15T12:53:31.283671072Z" level=info msg="StartContainer for \"3ed2a57464d234ba025b04565af2d13c263e52a2ef5518d0c2779fa802fc2ac9\" returns successfully" Jan 15 12:53:32.339600 containerd[1810]: time="2025-01-15T12:53:32.339481228Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 12:53:32.349741 kubelet[3479]: I0115 12:53:32.349633 3479 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 15 12:53:32.373141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ed2a57464d234ba025b04565af2d13c263e52a2ef5518d0c2779fa802fc2ac9-rootfs.mount: Deactivated successfully. Jan 15 12:53:32.390983 kubelet[3479]: I0115 12:53:32.390898 3479 topology_manager.go:215] "Topology Admit Handler" podUID="9197b20e-feec-468b-98f4-a4ecccedcf24" podNamespace="calico-system" podName="calico-kube-controllers-666f9cb696-hwrgm" Jan 15 12:53:32.404056 kubelet[3479]: I0115 12:53:32.400288 3479 topology_manager.go:215] "Topology Admit Handler" podUID="23ead334-e963-4418-9ca2-7ff1cba5daa6" podNamespace="kube-system" podName="coredns-76f75df574-frkvk" Jan 15 12:53:32.405729 kubelet[3479]: I0115 12:53:32.405705 3479 topology_manager.go:215] "Topology Admit Handler" podUID="b34cd598-6eb1-42df-917d-effcdfc5a29b" podNamespace="calico-apiserver" podName="calico-apiserver-6b49cb6669-qcv6d" Jan 15 12:53:32.405992 kubelet[3479]: I0115 12:53:32.405796 3479 topology_manager.go:215] "Topology Admit Handler" podUID="853a9ed3-353b-4e0d-99d5-673d9014d6e9" podNamespace="calico-apiserver" podName="calico-apiserver-6b49cb6669-sq7jw" Jan 15 12:53:32.408206 kubelet[3479]: I0115 12:53:32.408160 3479 topology_manager.go:215] "Topology Admit Handler" podUID="d4339fa4-8d81-4248-bba4-5dd6857b8e52" podNamespace="kube-system" podName="coredns-76f75df574-ztv7l" Jan 15 12:53:32.468690 kubelet[3479]: I0115 12:53:32.468012 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-492l2\" (UniqueName: \"kubernetes.io/projected/9197b20e-feec-468b-98f4-a4ecccedcf24-kube-api-access-492l2\") pod \"calico-kube-controllers-666f9cb696-hwrgm\" (UID: \"9197b20e-feec-468b-98f4-a4ecccedcf24\") " pod="calico-system/calico-kube-controllers-666f9cb696-hwrgm" Jan 15 12:53:32.468690 kubelet[3479]: I0115 12:53:32.468148 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9197b20e-feec-468b-98f4-a4ecccedcf24-tigera-ca-bundle\") pod \"calico-kube-controllers-666f9cb696-hwrgm\" (UID: \"9197b20e-feec-468b-98f4-a4ecccedcf24\") " pod="calico-system/calico-kube-controllers-666f9cb696-hwrgm" Jan 15 12:53:32.468690 kubelet[3479]: I0115 12:53:32.468181 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng7db\" (UniqueName: \"kubernetes.io/projected/b34cd598-6eb1-42df-917d-effcdfc5a29b-kube-api-access-ng7db\") pod \"calico-apiserver-6b49cb6669-qcv6d\" (UID: \"b34cd598-6eb1-42df-917d-effcdfc5a29b\") " pod="calico-apiserver/calico-apiserver-6b49cb6669-qcv6d" Jan 15 12:53:32.468690 kubelet[3479]: I0115 12:53:32.468323 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr2bl\" (UniqueName: \"kubernetes.io/projected/853a9ed3-353b-4e0d-99d5-673d9014d6e9-kube-api-access-wr2bl\") pod \"calico-apiserver-6b49cb6669-sq7jw\" (UID: \"853a9ed3-353b-4e0d-99d5-673d9014d6e9\") " pod="calico-apiserver/calico-apiserver-6b49cb6669-sq7jw" Jan 15 12:53:32.468690 kubelet[3479]: I0115 12:53:32.468401 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tlpb\" (UniqueName: \"kubernetes.io/projected/23ead334-e963-4418-9ca2-7ff1cba5daa6-kube-api-access-4tlpb\") pod \"coredns-76f75df574-frkvk\" (UID: \"23ead334-e963-4418-9ca2-7ff1cba5daa6\") " pod="kube-system/coredns-76f75df574-frkvk" Jan 15 12:53:32.468962 kubelet[3479]: I0115 12:53:32.468509 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4339fa4-8d81-4248-bba4-5dd6857b8e52-config-volume\") pod \"coredns-76f75df574-ztv7l\" (UID: \"d4339fa4-8d81-4248-bba4-5dd6857b8e52\") " pod="kube-system/coredns-76f75df574-ztv7l" Jan 15 12:53:32.468962 kubelet[3479]: I0115 12:53:32.468534 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxtg\" (UniqueName: \"kubernetes.io/projected/d4339fa4-8d81-4248-bba4-5dd6857b8e52-kube-api-access-2dxtg\") pod \"coredns-76f75df574-ztv7l\" (UID: \"d4339fa4-8d81-4248-bba4-5dd6857b8e52\") " pod="kube-system/coredns-76f75df574-ztv7l" Jan 15 12:53:32.468962 kubelet[3479]: I0115 12:53:32.468559 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/853a9ed3-353b-4e0d-99d5-673d9014d6e9-calico-apiserver-certs\") pod \"calico-apiserver-6b49cb6669-sq7jw\" (UID: \"853a9ed3-353b-4e0d-99d5-673d9014d6e9\") " pod="calico-apiserver/calico-apiserver-6b49cb6669-sq7jw" Jan 15 12:53:32.468962 kubelet[3479]: I0115 12:53:32.468803 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23ead334-e963-4418-9ca2-7ff1cba5daa6-config-volume\") pod \"coredns-76f75df574-frkvk\" (UID: \"23ead334-e963-4418-9ca2-7ff1cba5daa6\") " pod="kube-system/coredns-76f75df574-frkvk" Jan 15 12:53:32.469080 kubelet[3479]: I0115 12:53:32.468997 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b34cd598-6eb1-42df-917d-effcdfc5a29b-calico-apiserver-certs\") pod \"calico-apiserver-6b49cb6669-qcv6d\" (UID: \"b34cd598-6eb1-42df-917d-effcdfc5a29b\") " 
pod="calico-apiserver/calico-apiserver-6b49cb6669-qcv6d" Jan 15 12:53:32.697513 containerd[1810]: time="2025-01-15T12:53:32.697459720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666f9cb696-hwrgm,Uid:9197b20e-feec-468b-98f4-a4ecccedcf24,Namespace:calico-system,Attempt:0,}" Jan 15 12:53:32.718718 containerd[1810]: time="2025-01-15T12:53:32.718669393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b49cb6669-sq7jw,Uid:853a9ed3-353b-4e0d-99d5-673d9014d6e9,Namespace:calico-apiserver,Attempt:0,}" Jan 15 12:53:32.723527 containerd[1810]: time="2025-01-15T12:53:32.723492022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-frkvk,Uid:23ead334-e963-4418-9ca2-7ff1cba5daa6,Namespace:kube-system,Attempt:0,}" Jan 15 12:53:32.728471 containerd[1810]: time="2025-01-15T12:53:32.728428812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ztv7l,Uid:d4339fa4-8d81-4248-bba4-5dd6857b8e52,Namespace:kube-system,Attempt:0,}" Jan 15 12:53:32.734161 containerd[1810]: time="2025-01-15T12:53:32.734035839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b49cb6669-qcv6d,Uid:b34cd598-6eb1-42df-917d-effcdfc5a29b,Namespace:calico-apiserver,Attempt:0,}" Jan 15 12:53:32.865602 containerd[1810]: time="2025-01-15T12:53:32.865556430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9tjc,Uid:2e31c47b-ec08-47f1-903e-14f9c9ca8a9b,Namespace:calico-system,Attempt:0,}" Jan 15 12:53:33.560383 containerd[1810]: time="2025-01-15T12:53:33.560285352Z" level=info msg="shim disconnected" id=3ed2a57464d234ba025b04565af2d13c263e52a2ef5518d0c2779fa802fc2ac9 namespace=k8s.io Jan 15 12:53:33.560383 containerd[1810]: time="2025-01-15T12:53:33.560342912Z" level=warning msg="cleaning up after shim disconnected" id=3ed2a57464d234ba025b04565af2d13c263e52a2ef5518d0c2779fa802fc2ac9 namespace=k8s.io Jan 15 12:53:33.560383 containerd[1810]: time="2025-01-15T12:53:33.560351832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 12:53:33.776275 containerd[1810]: time="2025-01-15T12:53:33.775896088Z" level=error msg="Failed to destroy network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.776542 containerd[1810]: time="2025-01-15T12:53:33.776513528Z" level=error msg="encountered an error cleaning up failed sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.776901 containerd[1810]: time="2025-01-15T12:53:33.776623048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666f9cb696-hwrgm,Uid:9197b20e-feec-468b-98f4-a4ecccedcf24,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.776998 kubelet[3479]: E0115 12:53:33.776859 3479 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.776998 kubelet[3479]: E0115 12:53:33.776918 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-666f9cb696-hwrgm" Jan 15 12:53:33.776998 kubelet[3479]: E0115 12:53:33.776939 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-666f9cb696-hwrgm" Jan 15 12:53:33.777386 kubelet[3479]: E0115 12:53:33.776993 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-666f9cb696-hwrgm_calico-system(9197b20e-feec-468b-98f4-a4ecccedcf24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-666f9cb696-hwrgm_calico-system(9197b20e-feec-468b-98f4-a4ecccedcf24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-666f9cb696-hwrgm" podUID="9197b20e-feec-468b-98f4-a4ecccedcf24" Jan 15 12:53:33.892290 containerd[1810]: time="2025-01-15T12:53:33.891845094Z" level=error msg="Failed to destroy network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.892290 containerd[1810]: time="2025-01-15T12:53:33.892220854Z" level=error msg="encountered an error cleaning up failed sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.893166 containerd[1810]: time="2025-01-15T12:53:33.892273814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ztv7l,Uid:d4339fa4-8d81-4248-bba4-5dd6857b8e52,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.893166 containerd[1810]: time="2025-01-15T12:53:33.892942854Z" level=error msg="Failed to destroy network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.893428 kubelet[3479]: E0115 12:53:33.893397 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.893484 kubelet[3479]: E0115 12:53:33.893456 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ztv7l" Jan 15 12:53:33.893484 kubelet[3479]: E0115 12:53:33.893480 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ztv7l" Jan 15 12:53:33.893693 kubelet[3479]: E0115 12:53:33.893536 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ztv7l_kube-system(d4339fa4-8d81-4248-bba4-5dd6857b8e52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ztv7l_kube-system(d4339fa4-8d81-4248-bba4-5dd6857b8e52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ztv7l" podUID="d4339fa4-8d81-4248-bba4-5dd6857b8e52" Jan 15 12:53:33.896067 containerd[1810]: time="2025-01-15T12:53:33.896013373Z" level=error msg="encountered an error cleaning up failed sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.897731 containerd[1810]: time="2025-01-15T12:53:33.896084893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-frkvk,Uid:23ead334-e963-4418-9ca2-7ff1cba5daa6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.897832 kubelet[3479]: E0115 12:53:33.896366 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.897832 kubelet[3479]: E0115 12:53:33.896422 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-frkvk" Jan 15 12:53:33.897832 kubelet[3479]: E0115 12:53:33.896441 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-frkvk" Jan 15 12:53:33.897917 kubelet[3479]: E0115 12:53:33.897105 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-frkvk_kube-system(23ead334-e963-4418-9ca2-7ff1cba5daa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-frkvk_kube-system(23ead334-e963-4418-9ca2-7ff1cba5daa6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-frkvk" podUID="23ead334-e963-4418-9ca2-7ff1cba5daa6" Jan 15 12:53:33.899422 containerd[1810]: time="2025-01-15T12:53:33.899383772Z" level=error msg="Failed to destroy network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.901204 containerd[1810]: time="2025-01-15T12:53:33.901148291Z" level=error msg="Failed to destroy network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.901546 containerd[1810]: time="2025-01-15T12:53:33.901516971Z" level=error msg="encountered an error cleaning up failed sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 15 12:53:33.901713 containerd[1810]: time="2025-01-15T12:53:33.901673251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9tjc,Uid:2e31c47b-ec08-47f1-903e-14f9c9ca8a9b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.901863 containerd[1810]: time="2025-01-15T12:53:33.901606491Z" level=error msg="encountered an error cleaning up failed sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.902135 containerd[1810]: time="2025-01-15T12:53:33.902095771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b49cb6669-sq7jw,Uid:853a9ed3-353b-4e0d-99d5-673d9014d6e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.902773 kubelet[3479]: E0115 12:53:33.902577 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.902921 kubelet[3479]: E0115 12:53:33.902908 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b49cb6669-sq7jw" Jan 15 12:53:33.903001 kubelet[3479]: E0115 12:53:33.902992 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b49cb6669-sq7jw" Jan 15 12:53:33.903114 kubelet[3479]: E0115 12:53:33.903102 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b49cb6669-sq7jw_calico-apiserver(853a9ed3-353b-4e0d-99d5-673d9014d6e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b49cb6669-sq7jw_calico-apiserver(853a9ed3-353b-4e0d-99d5-673d9014d6e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b49cb6669-sq7jw" podUID="853a9ed3-353b-4e0d-99d5-673d9014d6e9" Jan 15 12:53:33.903341 kubelet[3479]: E0115 12:53:33.902755 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.903962 kubelet[3479]: E0115 12:53:33.903941 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z9tjc" Jan 15 12:53:33.905311 kubelet[3479]: E0115 12:53:33.904680 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z9tjc" Jan 15 12:53:33.905653 kubelet[3479]: E0115 12:53:33.905635 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z9tjc_calico-system(2e31c47b-ec08-47f1-903e-14f9c9ca8a9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z9tjc_calico-system(2e31c47b-ec08-47f1-903e-14f9c9ca8a9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z9tjc" podUID="2e31c47b-ec08-47f1-903e-14f9c9ca8a9b" Jan 15 12:53:33.912835 containerd[1810]: time="2025-01-15T12:53:33.912781688Z" level=error msg="Failed to destroy network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.913342 containerd[1810]: time="2025-01-15T12:53:33.913306208Z" level=error msg="encountered an error cleaning up failed sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.913393 containerd[1810]: time="2025-01-15T12:53:33.913377128Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b49cb6669-qcv6d,Uid:b34cd598-6eb1-42df-917d-effcdfc5a29b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.913628 kubelet[3479]: E0115 12:53:33.913610 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:33.913858 kubelet[3479]: E0115 12:53:33.913743 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b49cb6669-qcv6d" Jan 15 12:53:33.913858 kubelet[3479]: E0115 12:53:33.913767 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b49cb6669-qcv6d" Jan 15 12:53:33.913858 kubelet[3479]: E0115 12:53:33.913834 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b49cb6669-qcv6d_calico-apiserver(b34cd598-6eb1-42df-917d-effcdfc5a29b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b49cb6669-qcv6d_calico-apiserver(b34cd598-6eb1-42df-917d-effcdfc5a29b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b49cb6669-qcv6d" podUID="b34cd598-6eb1-42df-917d-effcdfc5a29b" Jan 15 12:53:34.009615 kubelet[3479]: I0115 12:53:34.009567 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:53:34.011837 containerd[1810]: time="2025-01-15T12:53:34.011703899Z" level=info msg="StopPodSandbox for \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\"" Jan 15 12:53:34.012429 kubelet[3479]: I0115 12:53:34.012273 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:53:34.013168 containerd[1810]: time="2025-01-15T12:53:34.012857538Z" level=info msg="Ensure that sandbox c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279 in 
task-service has been cleanup successfully" Jan 15 12:53:34.013168 containerd[1810]: time="2025-01-15T12:53:34.013126498Z" level=info msg="StopPodSandbox for \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\"" Jan 15 12:53:34.013531 containerd[1810]: time="2025-01-15T12:53:34.013426298Z" level=info msg="Ensure that sandbox 048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41 in task-service has been cleanup successfully" Jan 15 12:53:34.016295 kubelet[3479]: I0115 12:53:34.016217 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:53:34.018133 containerd[1810]: time="2025-01-15T12:53:34.017994697Z" level=info msg="StopPodSandbox for \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\"" Jan 15 12:53:34.018396 containerd[1810]: time="2025-01-15T12:53:34.018374657Z" level=info msg="Ensure that sandbox 4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b in task-service has been cleanup successfully" Jan 15 12:53:34.020759 kubelet[3479]: I0115 12:53:34.020728 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:53:34.024001 containerd[1810]: time="2025-01-15T12:53:34.023833375Z" level=info msg="StopPodSandbox for \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\"" Jan 15 12:53:34.024341 containerd[1810]: time="2025-01-15T12:53:34.024056895Z" level=info msg="Ensure that sandbox 8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c in task-service has been cleanup successfully" Jan 15 12:53:34.028088 kubelet[3479]: I0115 12:53:34.027574 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:53:34.028789 containerd[1810]: time="2025-01-15T12:53:34.028734214Z" level=info msg="StopPodSandbox for \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\"" Jan 15 12:53:34.029411 containerd[1810]: time="2025-01-15T12:53:34.029385374Z" level=info msg="Ensure that sandbox 73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b in task-service has been cleanup successfully" Jan 15 12:53:34.042331 containerd[1810]: time="2025-01-15T12:53:34.042073650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 15 12:53:34.054092 kubelet[3479]: I0115 12:53:34.053442 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:53:34.058579 containerd[1810]: time="2025-01-15T12:53:34.058535405Z" level=info msg="StopPodSandbox for \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\"" Jan 15 12:53:34.059010 containerd[1810]: time="2025-01-15T12:53:34.058973165Z" level=info msg="Ensure that sandbox bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873 in task-service has been cleanup successfully" Jan 15 12:53:34.108785 containerd[1810]: time="2025-01-15T12:53:34.108667190Z" level=error msg="StopPodSandbox for \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\" failed" error="failed to destroy network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 15 12:53:34.108959 containerd[1810]: time="2025-01-15T12:53:34.108733310Z" level=error msg="StopPodSandbox for \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\" failed" error="failed to destroy network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:34.110506 kubelet[3479]: E0115 12:53:34.109133 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:53:34.110506 kubelet[3479]: E0115 12:53:34.110340 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b"} Jan 15 12:53:34.110506 kubelet[3479]: E0115 12:53:34.110389 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b34cd598-6eb1-42df-917d-effcdfc5a29b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:53:34.110506 kubelet[3479]: E0115 12:53:34.110419 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b34cd598-6eb1-42df-917d-effcdfc5a29b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b49cb6669-qcv6d" podUID="b34cd598-6eb1-42df-917d-effcdfc5a29b" Jan 15 12:53:34.110739 kubelet[3479]: E0115 12:53:34.109133 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:53:34.110739 kubelet[3479]: E0115 12:53:34.110441 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41"} Jan 15 12:53:34.110739 kubelet[3479]: E0115 12:53:34.110465 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4339fa4-8d81-4248-bba4-5dd6857b8e52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:53:34.110739 kubelet[3479]: E0115 12:53:34.110485 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4339fa4-8d81-4248-bba4-5dd6857b8e52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ztv7l" podUID="d4339fa4-8d81-4248-bba4-5dd6857b8e52" Jan 15 12:53:34.124291 containerd[1810]: time="2025-01-15T12:53:34.124237906Z" level=error msg="StopPodSandbox for \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\" failed" error="failed to destroy network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:34.124675 kubelet[3479]: E0115 12:53:34.124640 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:53:34.124741 kubelet[3479]: E0115 12:53:34.124714 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b"} Jan 15 12:53:34.124766 kubelet[3479]: E0115 12:53:34.124753 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"853a9ed3-353b-4e0d-99d5-673d9014d6e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:53:34.124905 kubelet[3479]: E0115 12:53:34.124885 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"853a9ed3-353b-4e0d-99d5-673d9014d6e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b49cb6669-sq7jw" podUID="853a9ed3-353b-4e0d-99d5-673d9014d6e9" Jan 15 12:53:34.126647 containerd[1810]: time="2025-01-15T12:53:34.126596825Z" level=error msg="StopPodSandbox for 
\"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\" failed" error="failed to destroy network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:34.127021 kubelet[3479]: E0115 12:53:34.126839 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:53:34.127021 kubelet[3479]: E0115 12:53:34.126877 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279"} Jan 15 12:53:34.127021 kubelet[3479]: E0115 12:53:34.126909 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:53:34.127021 kubelet[3479]: E0115 12:53:34.126941 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z9tjc" podUID="2e31c47b-ec08-47f1-903e-14f9c9ca8a9b" Jan 15 12:53:34.134486 containerd[1810]: time="2025-01-15T12:53:34.133968423Z" level=error msg="StopPodSandbox for \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\" failed" error="failed to destroy network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:34.134611 kubelet[3479]: E0115 12:53:34.134307 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:53:34.134611 kubelet[3479]: E0115 12:53:34.134355 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873"} Jan 15 12:53:34.134611 kubelet[3479]: E0115 12:53:34.134405 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23ead334-e963-4418-9ca2-7ff1cba5daa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:53:34.134611 kubelet[3479]: E0115 12:53:34.134435 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23ead334-e963-4418-9ca2-7ff1cba5daa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-frkvk" podUID="23ead334-e963-4418-9ca2-7ff1cba5daa6" Jan 15 12:53:34.136632 containerd[1810]: time="2025-01-15T12:53:34.136552422Z" level=error msg="StopPodSandbox for \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\" failed" error="failed to destroy network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:53:34.136821 kubelet[3479]: E0115 12:53:34.136794 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:53:34.136884 kubelet[3479]: E0115 12:53:34.136843 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c"} Jan 15 12:53:34.136884 kubelet[3479]: E0115 12:53:34.136881 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9197b20e-feec-468b-98f4-a4ecccedcf24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:53:34.136961 kubelet[3479]: E0115 12:53:34.136907 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9197b20e-feec-468b-98f4-a4ecccedcf24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-666f9cb696-hwrgm" podUID="9197b20e-feec-468b-98f4-a4ecccedcf24" Jan 15 12:53:34.374468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873-shm.mount: Deactivated successfully. Jan 15 12:53:34.374608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b-shm.mount: Deactivated successfully. Jan 15 12:53:34.374686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c-shm.mount: Deactivated successfully. Jan 15 12:53:38.100117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970728915.mount: Deactivated successfully. Jan 15 12:53:38.429971 containerd[1810]: time="2025-01-15T12:53:38.429699278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:38.432673 containerd[1810]: time="2025-01-15T12:53:38.432609711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 15 12:53:38.436510 containerd[1810]: time="2025-01-15T12:53:38.436456103Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:38.441847 containerd[1810]: time="2025-01-15T12:53:38.441809971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:38.442732 containerd[1810]: time="2025-01-15T12:53:38.442322450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.3996404s" Jan 15 12:53:38.442732 containerd[1810]: time="2025-01-15T12:53:38.442359090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 15 12:53:38.457041 containerd[1810]: time="2025-01-15T12:53:38.457002377Z" level=info msg="CreateContainer within sandbox \"518f68b97b539a8c7bde273b6cbfca5f5a5424c3bb5ddeb7ce3ae7fc86653a9a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 12:53:38.514973 containerd[1810]: time="2025-01-15T12:53:38.514923248Z" level=info msg="CreateContainer within sandbox \"518f68b97b539a8c7bde273b6cbfca5f5a5424c3bb5ddeb7ce3ae7fc86653a9a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0bbcc4266aa31c35cdadfb8ac74a1fbb3f19011b16714802a820fd84b9d2aea8\"" Jan 15 12:53:38.516307 containerd[1810]: time="2025-01-15T12:53:38.516232566Z" level=info msg="StartContainer for \"0bbcc4266aa31c35cdadfb8ac74a1fbb3f19011b16714802a820fd84b9d2aea8\"" Jan 15 12:53:38.572733 containerd[1810]: time="2025-01-15T12:53:38.572684480Z" level=info msg="StartContainer for \"0bbcc4266aa31c35cdadfb8ac74a1fbb3f19011b16714802a820fd84b9d2aea8\" returns successfully" 
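[annotation] Every RunPodSandbox and StopPodSandbox failure above reduces to the same root cause: the Calico CNI plugin gates every network add/delete on /var/lib/calico/nodename, a marker file that calico/node writes once it is up, and the calico-node container only started at 12:53:38 after its image pull finished. A minimal Go sketch of that style of gating check follows; the names are illustrative, not Calico's actual source.

package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the marker calico/node writes on startup; its absence is
// what produced the repeated "stat /var/lib/calico/nodename" errors above.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename fails fast with the same hint the CNI plugin logs when the
// marker is missing, instead of attempting any network setup or teardown.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, os.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // what containerd surfaced for every sandbox above
		os.Exit(1)
	}
	fmt.Println("node:", name)
}

Once the marker exists, the same sandbox IDs that failed to stop at 12:53:34 tear down cleanly, as the entries from 12:53:45 onward show.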
Jan 15 12:53:38.893645 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 12:53:38.893791 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 15 12:53:39.123176 systemd[1]: run-containerd-runc-k8s.io-0bbcc4266aa31c35cdadfb8ac74a1fbb3f19011b16714802a820fd84b9d2aea8-runc.4dHYSa.mount: Deactivated successfully. Jan 15 12:53:40.525223 kernel: bpftool[4740]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 15 12:53:41.404781 systemd-networkd[1381]: vxlan.calico: Link UP Jan 15 12:53:41.404794 systemd-networkd[1381]: vxlan.calico: Gained carrier Jan 15 12:53:42.493843 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Jan 15 12:53:45.865098 containerd[1810]: time="2025-01-15T12:53:45.865049216Z" level=info msg="StopPodSandbox for \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\"" Jan 15 12:53:45.867526 containerd[1810]: time="2025-01-15T12:53:45.865118576Z" level=info msg="StopPodSandbox for \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\"" Jan 15 12:53:45.867526 containerd[1810]: time="2025-01-15T12:53:45.865170696Z" level=info msg="StopPodSandbox for \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\"" Jan 15 12:53:45.979929 kubelet[3479]: I0115 12:53:45.979833 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-ck4tx" podStartSLOduration=8.685470049 podStartE2EDuration="23.97978369s" podCreationTimestamp="2025-01-15 12:53:22 +0000 UTC" firstStartedPulling="2025-01-15 12:53:23.148258368 +0000 UTC m=+37.394737113" lastFinishedPulling="2025-01-15 12:53:38.442572009 +0000 UTC m=+52.689050754" observedRunningTime="2025-01-15 12:53:39.117959508 +0000 UTC m=+53.364438253" watchObservedRunningTime="2025-01-15 12:53:45.97978369 +0000 UTC m=+60.226262435" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:45.986 [INFO][4855] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:45.986 [INFO][4855] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" iface="eth0" netns="/var/run/netns/cni-f9d334c1-afd3-0421-2497-6c8443a0f7d9" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:45.987 [INFO][4855] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" iface="eth0" netns="/var/run/netns/cni-f9d334c1-afd3-0421-2497-6c8443a0f7d9" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:45.987 [INFO][4855] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" iface="eth0" netns="/var/run/netns/cni-f9d334c1-afd3-0421-2497-6c8443a0f7d9" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:45.987 [INFO][4855] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:45.987 [INFO][4855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:46.016 [INFO][4878] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" HandleID="k8s-pod-network.4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:46.016 [INFO][4878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:46.016 [INFO][4878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:46.027 [WARNING][4878] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" HandleID="k8s-pod-network.4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:46.028 [INFO][4878] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" HandleID="k8s-pod-network.4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:46.030 [INFO][4878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:53:46.038268 containerd[1810]: 2025-01-15 12:53:46.033 [INFO][4855] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:53:46.039639 containerd[1810]: time="2025-01-15T12:53:46.038805483Z" level=info msg="TearDown network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\" successfully" Jan 15 12:53:46.039639 containerd[1810]: time="2025-01-15T12:53:46.039538562Z" level=info msg="StopPodSandbox for \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\" returns successfully" Jan 15 12:53:46.042796 systemd[1]: run-netns-cni\x2df9d334c1\x2dafd3\x2d0421\x2d2497\x2d6c8443a0f7d9.mount: Deactivated successfully. Jan 15 12:53:46.051267 containerd[1810]: time="2025-01-15T12:53:46.050557698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b49cb6669-qcv6d,Uid:b34cd598-6eb1-42df-917d-effcdfc5a29b,Namespace:calico-apiserver,Attempt:1,}" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:45.978 [INFO][4854] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:45.978 [INFO][4854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" iface="eth0" netns="/var/run/netns/cni-e99b666e-81af-fac0-aa78-d1b992dcab42" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:45.979 [INFO][4854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" iface="eth0" netns="/var/run/netns/cni-e99b666e-81af-fac0-aa78-d1b992dcab42" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:45.980 [INFO][4854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" iface="eth0" netns="/var/run/netns/cni-e99b666e-81af-fac0-aa78-d1b992dcab42" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:45.980 [INFO][4854] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:45.980 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:46.028 [INFO][4873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" HandleID="k8s-pod-network.c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:46.029 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:46.030 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:46.047 [WARNING][4873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" HandleID="k8s-pod-network.c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:46.047 [INFO][4873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" HandleID="k8s-pod-network.c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:46.048 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:53:46.054913 containerd[1810]: 2025-01-15 12:53:46.053 [INFO][4854] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:53:46.056237 containerd[1810]: time="2025-01-15T12:53:46.055328648Z" level=info msg="TearDown network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\" successfully" Jan 15 12:53:46.056237 containerd[1810]: time="2025-01-15T12:53:46.055353408Z" level=info msg="StopPodSandbox for \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\" returns successfully" Jan 15 12:53:46.060648 systemd[1]: run-netns-cni\x2de99b666e\x2d81af\x2dfac0\x2daa78\x2dd1b992dcab42.mount: Deactivated successfully. 
Jan 15 12:53:46.065227 containerd[1810]: time="2025-01-15T12:53:46.064853347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9tjc,Uid:2e31c47b-ec08-47f1-903e-14f9c9ca8a9b,Namespace:calico-system,Attempt:1,}" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:45.985 [INFO][4853] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:45.986 [INFO][4853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" iface="eth0" netns="/var/run/netns/cni-1e0cc5b3-7e84-8cce-f646-1b4fddf2463e" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:45.987 [INFO][4853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" iface="eth0" netns="/var/run/netns/cni-1e0cc5b3-7e84-8cce-f646-1b4fddf2463e" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:45.987 [INFO][4853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" iface="eth0" netns="/var/run/netns/cni-1e0cc5b3-7e84-8cce-f646-1b4fddf2463e" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:45.987 [INFO][4853] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:45.987 [INFO][4853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:46.041 [INFO][4877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" HandleID="k8s-pod-network.048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:46.042 [INFO][4877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:46.049 [INFO][4877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:46.062 [WARNING][4877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" HandleID="k8s-pod-network.048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:46.062 [INFO][4877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" HandleID="k8s-pod-network.048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:46.064 [INFO][4877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:53:46.072040 containerd[1810]: 2025-01-15 12:53:46.068 [INFO][4853] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:53:46.072436 containerd[1810]: time="2025-01-15T12:53:46.072171292Z" level=info msg="TearDown network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\" successfully" Jan 15 12:53:46.072436 containerd[1810]: time="2025-01-15T12:53:46.072215772Z" level=info msg="StopPodSandbox for \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\" returns successfully" Jan 15 12:53:46.072977 containerd[1810]: time="2025-01-15T12:53:46.072924010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ztv7l,Uid:d4339fa4-8d81-4248-bba4-5dd6857b8e52,Namespace:kube-system,Attempt:1,}" Jan 15 12:53:46.076535 systemd[1]: run-netns-cni\x2d1e0cc5b3\x2d7e84\x2d8cce\x2df646\x2d1b4fddf2463e.mount: Deactivated successfully. Jan 15 12:53:46.347917 systemd-networkd[1381]: cali7cd56c571cb: Link UP Jan 15 12:53:46.348153 systemd-networkd[1381]: cali7cd56c571cb: Gained carrier Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.201 [INFO][4892] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0 calico-apiserver-6b49cb6669- calico-apiserver b34cd598-6eb1-42df-917d-effcdfc5a29b 793 0 2025-01-15 12:53:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b49cb6669 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-b64d8040ed calico-apiserver-6b49cb6669-qcv6d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7cd56c571cb [] []}} ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-qcv6d" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.201 [INFO][4892] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-qcv6d" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.260 [INFO][4923] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" HandleID="k8s-pod-network.4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.278 [INFO][4923] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" HandleID="k8s-pod-network.4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ba160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-b64d8040ed", "pod":"calico-apiserver-6b49cb6669-qcv6d", "timestamp":"2025-01-15 12:53:46.260563367 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b64d8040ed", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.278 [INFO][4923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.278 [INFO][4923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.278 [INFO][4923] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b64d8040ed' Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.283 [INFO][4923] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.290 [INFO][4923] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.296 [INFO][4923] ipam/ipam.go 489: Trying affinity for 192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.309 [INFO][4923] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.313 [INFO][4923] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.313 [INFO][4923] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.315 [INFO][4923] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034 Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.329 [INFO][4923] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.335 [INFO][4923] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.1/26] block=192.168.72.0/26 handle="k8s-pod-network.4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.335 [INFO][4923] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.1/26] handle="k8s-pod-network.4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.337 [INFO][4923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
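
The ADD for the calico-apiserver pod then walks the IPAM steps in order: look up the host's block affinities, confirm the affine block 192.168.72.0/26, load it, assign one ordinal, create a handle, and write the block back to claim the IP, ending with 192.168.72.1/26. A sketch of the first-free-ordinal step under assumed types (Calico's actual block structure in ipam/ipam.go differs; ordinal 0, the network address, is treated as reserved here to match the .1 result):

package main

import (
	"fmt"
	"net/netip"
)

// block models an affine IPAM block such as 192.168.72.0/26: 64
// ordinals, each either free or tied to the handle that claimed it.
type block struct {
	cidr  netip.Prefix
	alloc [64]string // "" = free, otherwise the claiming handleID
}

// assign claims the lowest free ordinal for handleID, mirroring the
// log's "Attempting to assign 1 addresses from block" ->
// "Successfully claimed IPs" sequence.
func (b *block) assign(handleID string) (netip.Addr, bool) {
	addr := b.cidr.Addr()
	for i := range b.alloc {
		if b.alloc[i] == "" {
			b.alloc[i] = handleID
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.72.0/26")}
	b.alloc[0] = "reserved" // assumption: .0 (network address) is never handed out
	ip, _ := b.assign("k8s-pod-network.4b023d2e4068...")
	fmt.Println(ip) // 192.168.72.1, matching the apiserver pod's claim
}
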
Jan 15 12:53:46.374223 containerd[1810]: 2025-01-15 12:53:46.337 [INFO][4923] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.1/26] IPv6=[] ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" HandleID="k8s-pod-network.4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.375378 containerd[1810]: 2025-01-15 12:53:46.342 [INFO][4892] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-qcv6d" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0", GenerateName:"calico-apiserver-6b49cb6669-", Namespace:"calico-apiserver", SelfLink:"", UID:"b34cd598-6eb1-42df-917d-effcdfc5a29b", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b49cb6669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"", Pod:"calico-apiserver-6b49cb6669-qcv6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cd56c571cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:46.375378 containerd[1810]: 2025-01-15 12:53:46.343 [INFO][4892] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.1/32] ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-qcv6d" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.375378 containerd[1810]: 2025-01-15 12:53:46.343 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cd56c571cb ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-qcv6d" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.375378 containerd[1810]: 2025-01-15 12:53:46.346 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-qcv6d" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.375378 containerd[1810]: 2025-01-15 12:53:46.348 [INFO][4892] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-qcv6d" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0", GenerateName:"calico-apiserver-6b49cb6669-", Namespace:"calico-apiserver", SelfLink:"", UID:"b34cd598-6eb1-42df-917d-effcdfc5a29b", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b49cb6669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034", Pod:"calico-apiserver-6b49cb6669-qcv6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cd56c571cb", MAC:"62:34:05:99:4e:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:46.375378 containerd[1810]: 2025-01-15 12:53:46.366 [INFO][4892] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-qcv6d" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:53:46.413035 systemd-networkd[1381]: cali0c0f19a9e2d: Link UP Jan 15 12:53:46.414885 systemd-networkd[1381]: cali0c0f19a9e2d: Gained carrier Jan 15 12:53:46.441419 containerd[1810]: time="2025-01-15T12:53:46.441225460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:46.441419 containerd[1810]: time="2025-01-15T12:53:46.441328739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:46.442854 containerd[1810]: time="2025-01-15T12:53:46.441344419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.222 [INFO][4901] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0 csi-node-driver- calico-system 2e31c47b-ec08-47f1-903e-14f9c9ca8a9b 792 0 2025-01-15 12:53:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-b64d8040ed csi-node-driver-z9tjc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0c0f19a9e2d [] []}} ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Namespace="calico-system" Pod="csi-node-driver-z9tjc" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.222 [INFO][4901] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Namespace="calico-system" Pod="csi-node-driver-z9tjc" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.292 [INFO][4928] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" HandleID="k8s-pod-network.f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.310 [INFO][4928] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" HandleID="k8s-pod-network.f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003fdde0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-b64d8040ed", "pod":"csi-node-driver-z9tjc", "timestamp":"2025-01-15 12:53:46.292452859 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b64d8040ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.310 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.336 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
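
Two ADD requests are in flight at this point ([4923] for the apiserver pod, [4928] for csi-node-driver) and their entries interleave in containerd's output, so tracing one sandbox means filtering on its ContainerID. A throwaway filter for these lines; the pattern assumes exactly the layout shown here (level, bracketed request id, source file and line, message, ContainerID) and is not a stable format:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// One capture group each for level, request id, source file, source
// line, message text, and the 64-hex-char ContainerID.
var entry = regexp.MustCompile(
	`\[(INFO|WARNING|ERROR)\]\[(\d+)\] (\S+) (\d+): (.*?) ContainerID="([0-9a-f]{64})"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines run very long
	for sc.Scan() {
		if m := entry.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("[%s] req=%s %s:%s %.60s cid=%.12s\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
}
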
Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.336 [INFO][4928] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b64d8040ed' Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.339 [INFO][4928] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.343 [INFO][4928] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.356 [INFO][4928] ipam/ipam.go 489: Trying affinity for 192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.359 [INFO][4928] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.368 [INFO][4928] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.369 [INFO][4928] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.370 [INFO][4928] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224 Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.383 [INFO][4928] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.397 [INFO][4928] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.2/26] block=192.168.72.0/26 handle="k8s-pod-network.f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.398 [INFO][4928] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.2/26] handle="k8s-pod-network.f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.398 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
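
The csi-node-driver pod draws from the same affine block and receives the next free ordinal, 192.168.72.2/26. The block's capacity follows directly from the prefix length; a quick check of its bounds:

package main

import (
	"fmt"
	"net/netip"
)

// A /26 holds 2^(32-26) = 64 addresses, which is why every pod on
// this node lands in 192.168.72.0 .. 192.168.72.63.
func main() {
	p := netip.MustParsePrefix("192.168.72.0/26")
	size := 1 << (32 - p.Bits())
	last := p.Addr()
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	fmt.Printf("block %s: %s .. %s (%d addresses)\n", p, p.Addr(), last, size)
}
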
Jan 15 12:53:46.447637 containerd[1810]: 2025-01-15 12:53:46.398 [INFO][4928] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.2/26] IPv6=[] ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" HandleID="k8s-pod-network.f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.448740 containerd[1810]: 2025-01-15 12:53:46.404 [INFO][4901] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Namespace="calico-system" Pod="csi-node-driver-z9tjc" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"", Pod:"csi-node-driver-z9tjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c0f19a9e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:46.448740 containerd[1810]: 2025-01-15 12:53:46.404 [INFO][4901] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.2/32] ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Namespace="calico-system" Pod="csi-node-driver-z9tjc" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.448740 containerd[1810]: 2025-01-15 12:53:46.405 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c0f19a9e2d ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Namespace="calico-system" Pod="csi-node-driver-z9tjc" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.448740 containerd[1810]: 2025-01-15 12:53:46.416 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Namespace="calico-system" Pod="csi-node-driver-z9tjc" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.448740 containerd[1810]: 2025-01-15 12:53:46.416 [INFO][4901] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" 
Namespace="calico-system" Pod="csi-node-driver-z9tjc" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224", Pod:"csi-node-driver-z9tjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c0f19a9e2d", MAC:"62:d6:44:8f:56:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:46.448740 containerd[1810]: 2025-01-15 12:53:46.440 [INFO][4901] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224" Namespace="calico-system" Pod="csi-node-driver-z9tjc" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:53:46.448740 containerd[1810]: time="2025-01-15T12:53:46.448028125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:46.494517 systemd-networkd[1381]: cali535a8b3f439: Link UP Jan 15 12:53:46.495984 systemd-networkd[1381]: cali535a8b3f439: Gained carrier Jan 15 12:53:46.505057 containerd[1810]: time="2025-01-15T12:53:46.504721123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:46.505057 containerd[1810]: time="2025-01-15T12:53:46.504777363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:46.505057 containerd[1810]: time="2025-01-15T12:53:46.504792603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:46.505057 containerd[1810]: time="2025-01-15T12:53:46.504868403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.277 [INFO][4914] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0 coredns-76f75df574- kube-system d4339fa4-8d81-4248-bba4-5dd6857b8e52 794 0 2025-01-15 12:53:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-b64d8040ed coredns-76f75df574-ztv7l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali535a8b3f439 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Namespace="kube-system" Pod="coredns-76f75df574-ztv7l" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.278 [INFO][4914] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Namespace="kube-system" Pod="coredns-76f75df574-ztv7l" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.320 [INFO][4938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" HandleID="k8s-pod-network.32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.337 [INFO][4938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" HandleID="k8s-pod-network.32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003167e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-b64d8040ed", "pod":"coredns-76f75df574-ztv7l", "timestamp":"2025-01-15 12:53:46.320285719 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b64d8040ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.338 [INFO][4938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.398 [INFO][4938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
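
The coredns endpoint found by plugin.go 325 carries three named ports ({dns UDP 53}, {dns-tcp TCP 53}, {metrics TCP 9153}); in the Go struct dumps below they reappear in hex as Port:0x35 and Port:0x23c1, which are the same 53 and 9153. A quick decode using an illustrative struct (the real type is Calico's v3 WorkloadEndpointPort):

package main

import "fmt"

// endpointPort mirrors the shape of the WorkloadEndpointPort values
// dumped in the endpoint, keeping only the fields used here.
type endpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

func main() {
	ports := []endpointPort{
		{"dns", "UDP", 0x35},       // 53
		{"dns-tcp", "TCP", 0x35},   // 53
		{"metrics", "TCP", 0x23c1}, // 9153
	}
	for _, p := range ports {
		fmt.Printf("%-8s %-3s %d\n", p.Name, p.Protocol, p.Port)
	}
}
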
Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.399 [INFO][4938] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b64d8040ed' Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.405 [INFO][4938] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.416 [INFO][4938] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.427 [INFO][4938] ipam/ipam.go 489: Trying affinity for 192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.434 [INFO][4938] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.442 [INFO][4938] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.442 [INFO][4938] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.447 [INFO][4938] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5 Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.465 [INFO][4938] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.480 [INFO][4938] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.3/26] block=192.168.72.0/26 handle="k8s-pod-network.32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.480 [INFO][4938] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.3/26] handle="k8s-pod-network.32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.480 [INFO][4938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
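
The coredns request logged "About to acquire" at 46.338 but "Acquired" only at 46.398, exactly when the csi-node-driver request released the lock, so the lock serializes IPAM work on this host. Across hosts the "Writing block in order to claim IPs" step still has to be safe, and the load-then-write ordering in the log suggests an optimistic, revision-checked update: the claim commits only if the block is unchanged since it was read, otherwise reload and retry. A sketch of that loop, with an atomic counter standing in for the datastore revision (an assumption about the mechanism, not Calico's literal code):

package main

import (
	"fmt"
	"sync/atomic"
)

type store struct{ rev atomic.Int64 }

// writeBlock commits only if nobody wrote the block since readRev.
func (s *store) writeBlock(readRev int64) bool {
	return s.rev.CompareAndSwap(readRev, readRev+1)
}

func claim(s *store, handle string) {
	for {
		rev := s.rev.Load() // "Attempting to load block"
		// ... pick a free ordinal from the loaded block here ...
		if s.writeBlock(rev) { // "Writing block in order to claim IPs"
			fmt.Printf("claimed for %s at rev %d\n", handle, rev+1)
			return
		}
		// Lost a race with another writer: reload and retry.
	}
}

func main() {
	s := &store{}
	claim(s, "k8s-pod-network.32ecb885e9ba...")
}
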
Jan 15 12:53:46.520925 containerd[1810]: 2025-01-15 12:53:46.480 [INFO][4938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.3/26] IPv6=[] ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" HandleID="k8s-pod-network.32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.521874 containerd[1810]: 2025-01-15 12:53:46.483 [INFO][4914] cni-plugin/k8s.go 386: Populated endpoint ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Namespace="kube-system" Pod="coredns-76f75df574-ztv7l" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d4339fa4-8d81-4248-bba4-5dd6857b8e52", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"", Pod:"coredns-76f75df574-ztv7l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali535a8b3f439", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:46.521874 containerd[1810]: 2025-01-15 12:53:46.483 [INFO][4914] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.3/32] ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Namespace="kube-system" Pod="coredns-76f75df574-ztv7l" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.521874 containerd[1810]: 2025-01-15 12:53:46.483 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali535a8b3f439 ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Namespace="kube-system" Pod="coredns-76f75df574-ztv7l" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.521874 containerd[1810]: 2025-01-15 12:53:46.496 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Namespace="kube-system" Pod="coredns-76f75df574-ztv7l" 
WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.521874 containerd[1810]: 2025-01-15 12:53:46.497 [INFO][4914] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Namespace="kube-system" Pod="coredns-76f75df574-ztv7l" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d4339fa4-8d81-4248-bba4-5dd6857b8e52", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5", Pod:"coredns-76f75df574-ztv7l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali535a8b3f439", MAC:"f2:ed:b6:f0:ff:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:46.521874 containerd[1810]: 2025-01-15 12:53:46.517 [INFO][4914] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5" Namespace="kube-system" Pod="coredns-76f75df574-ztv7l" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:53:46.561696 containerd[1810]: time="2025-01-15T12:53:46.561565481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:46.562128 containerd[1810]: time="2025-01-15T12:53:46.561666281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:46.562128 containerd[1810]: time="2025-01-15T12:53:46.561682201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:46.562128 containerd[1810]: time="2025-01-15T12:53:46.561922841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:46.571940 containerd[1810]: time="2025-01-15T12:53:46.571879219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b49cb6669-qcv6d,Uid:b34cd598-6eb1-42df-917d-effcdfc5a29b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034\"" Jan 15 12:53:46.572693 containerd[1810]: time="2025-01-15T12:53:46.572603898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9tjc,Uid:2e31c47b-ec08-47f1-903e-14f9c9ca8a9b,Namespace:calico-system,Attempt:1,} returns sandbox id \"f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224\"" Jan 15 12:53:46.575480 containerd[1810]: time="2025-01-15T12:53:46.575457572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 15 12:53:46.617891 containerd[1810]: time="2025-01-15T12:53:46.617750361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ztv7l,Uid:d4339fa4-8d81-4248-bba4-5dd6857b8e52,Namespace:kube-system,Attempt:1,} returns sandbox id \"32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5\"" Jan 15 12:53:46.622774 containerd[1810]: time="2025-01-15T12:53:46.622655590Z" level=info msg="CreateContainer within sandbox \"32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 12:53:46.662228 containerd[1810]: time="2025-01-15T12:53:46.662003906Z" level=info msg="CreateContainer within sandbox \"32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b03e3d7c5bd9e42c44b235858e995992017e1dbb59a63395fbf2d55edde44559\"" Jan 15 12:53:46.665471 containerd[1810]: time="2025-01-15T12:53:46.663249823Z" level=info msg="StartContainer for \"b03e3d7c5bd9e42c44b235858e995992017e1dbb59a63395fbf2d55edde44559\"" Jan 15 12:53:46.716928 containerd[1810]: time="2025-01-15T12:53:46.716465189Z" level=info msg="StartContainer for \"b03e3d7c5bd9e42c44b235858e995992017e1dbb59a63395fbf2d55edde44559\" returns successfully" Jan 15 12:53:46.865402 containerd[1810]: time="2025-01-15T12:53:46.865021070Z" level=info msg="StopPodSandbox for \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\"" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.919 [INFO][5163] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.919 [INFO][5163] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" iface="eth0" netns="/var/run/netns/cni-aa5b33e2-e316-ee31-2640-803ea62cb6d0" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.920 [INFO][5163] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" iface="eth0" netns="/var/run/netns/cni-aa5b33e2-e316-ee31-2640-803ea62cb6d0" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.920 [INFO][5163] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" iface="eth0" netns="/var/run/netns/cni-aa5b33e2-e316-ee31-2640-803ea62cb6d0" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.920 [INFO][5163] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.920 [INFO][5163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.942 [INFO][5172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" HandleID="k8s-pod-network.8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.942 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.942 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.950 [WARNING][5172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" HandleID="k8s-pod-network.8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.950 [INFO][5172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" HandleID="k8s-pod-network.8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.952 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:53:46.955000 containerd[1810]: 2025-01-15 12:53:46.953 [INFO][5163] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:53:46.955668 containerd[1810]: time="2025-01-15T12:53:46.955113877Z" level=info msg="TearDown network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\" successfully" Jan 15 12:53:46.955668 containerd[1810]: time="2025-01-15T12:53:46.955138357Z" level=info msg="StopPodSandbox for \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\" returns successfully" Jan 15 12:53:46.956370 containerd[1810]: time="2025-01-15T12:53:46.955973995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666f9cb696-hwrgm,Uid:9197b20e-feec-468b-98f4-a4ecccedcf24,Namespace:calico-system,Attempt:1,}" Jan 15 12:53:47.049776 systemd[1]: run-netns-cni\x2daa5b33e2\x2de316\x2dee31\x2d2640\x2d803ea62cb6d0.mount: Deactivated successfully. 
Jan 15 12:53:47.116723 systemd-networkd[1381]: cali1752097df50: Link UP Jan 15 12:53:47.116914 systemd-networkd[1381]: cali1752097df50: Gained carrier Jan 15 12:53:47.127576 kubelet[3479]: I0115 12:53:47.127533 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ztv7l" podStartSLOduration=47.127492107 podStartE2EDuration="47.127492107s" podCreationTimestamp="2025-01-15 12:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:53:47.126575669 +0000 UTC m=+61.373054414" watchObservedRunningTime="2025-01-15 12:53:47.127492107 +0000 UTC m=+61.373970812" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.029 [INFO][5180] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0 calico-kube-controllers-666f9cb696- calico-system 9197b20e-feec-468b-98f4-a4ecccedcf24 813 0 2025-01-15 12:53:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:666f9cb696 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-b64d8040ed calico-kube-controllers-666f9cb696-hwrgm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1752097df50 [] []}} ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Namespace="calico-system" Pod="calico-kube-controllers-666f9cb696-hwrgm" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.029 [INFO][5180] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Namespace="calico-system" Pod="calico-kube-controllers-666f9cb696-hwrgm" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.067 [INFO][5191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" HandleID="k8s-pod-network.3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.077 [INFO][5191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" HandleID="k8s-pod-network.3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002234b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-b64d8040ed", "pod":"calico-kube-controllers-666f9cb696-hwrgm", "timestamp":"2025-01-15 12:53:47.066996516 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b64d8040ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 
12:53:47.077 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.077 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.077 [INFO][5191] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b64d8040ed' Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.079 [INFO][5191] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.082 [INFO][5191] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.086 [INFO][5191] ipam/ipam.go 489: Trying affinity for 192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.087 [INFO][5191] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.089 [INFO][5191] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.089 [INFO][5191] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.090 [INFO][5191] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.095 [INFO][5191] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.105 [INFO][5191] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.4/26] block=192.168.72.0/26 handle="k8s-pod-network.3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.106 [INFO][5191] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.4/26] handle="k8s-pod-network.3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.106 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
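
Every [5191] entry above shares the journal timestamp 12:53:47.140955 while the embedded plugin timestamps advance from .077 to .106, apparently because the plugin's buffered output reaches the journal in one flush. Ordering by the inner timestamp is therefore more faithful than ordering by the journal's; this parses that embedded field (format assumed from the lines above):

package main

import (
	"fmt"
	"regexp"
	"time"
)

// The plugin prefixes each message with its own millisecond timestamp.
var inner = regexp.MustCompile(`^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}`)

func main() {
	msg := "2025-01-15 12:53:47.077 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock."
	ts := inner.FindString(msg)
	t, err := time.Parse("2006-01-02 15:04:05.000", ts)
	if err != nil {
		panic(err)
	}
	fmt.Println(t.Format(time.RFC3339Nano)) // 2025-01-15T12:53:47.077Z
}
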
Jan 15 12:53:47.140955 containerd[1810]: 2025-01-15 12:53:47.106 [INFO][5191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.4/26] IPv6=[] ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" HandleID="k8s-pod-network.3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:47.142982 containerd[1810]: 2025-01-15 12:53:47.110 [INFO][5180] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Namespace="calico-system" Pod="calico-kube-controllers-666f9cb696-hwrgm" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0", GenerateName:"calico-kube-controllers-666f9cb696-", Namespace:"calico-system", SelfLink:"", UID:"9197b20e-feec-468b-98f4-a4ecccedcf24", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666f9cb696", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"", Pod:"calico-kube-controllers-666f9cb696-hwrgm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1752097df50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:47.142982 containerd[1810]: 2025-01-15 12:53:47.111 [INFO][5180] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.4/32] ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Namespace="calico-system" Pod="calico-kube-controllers-666f9cb696-hwrgm" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:47.142982 containerd[1810]: 2025-01-15 12:53:47.112 [INFO][5180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1752097df50 ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Namespace="calico-system" Pod="calico-kube-controllers-666f9cb696-hwrgm" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:47.142982 containerd[1810]: 2025-01-15 12:53:47.116 [INFO][5180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Namespace="calico-system" Pod="calico-kube-controllers-666f9cb696-hwrgm" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:47.142982 
containerd[1810]: 2025-01-15 12:53:47.117 [INFO][5180] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Namespace="calico-system" Pod="calico-kube-controllers-666f9cb696-hwrgm" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0", GenerateName:"calico-kube-controllers-666f9cb696-", Namespace:"calico-system", SelfLink:"", UID:"9197b20e-feec-468b-98f4-a4ecccedcf24", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666f9cb696", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af", Pod:"calico-kube-controllers-666f9cb696-hwrgm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1752097df50", MAC:"5e:e4:91:a7:5a:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:47.142982 containerd[1810]: 2025-01-15 12:53:47.134 [INFO][5180] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af" Namespace="calico-system" Pod="calico-kube-controllers-666f9cb696-hwrgm" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:53:47.179381 containerd[1810]: time="2025-01-15T12:53:47.179287715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:47.179381 containerd[1810]: time="2025-01-15T12:53:47.179342115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:47.179381 containerd[1810]: time="2025-01-15T12:53:47.179352795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:47.179637 containerd[1810]: time="2025-01-15T12:53:47.179431275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:47.230946 containerd[1810]: time="2025-01-15T12:53:47.230901605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666f9cb696-hwrgm,Uid:9197b20e-feec-468b-98f4-a4ecccedcf24,Namespace:calico-system,Attempt:1,} returns sandbox id \"3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af\"" Jan 15 12:53:47.776682 containerd[1810]: time="2025-01-15T12:53:47.776478714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:47.779323 containerd[1810]: time="2025-01-15T12:53:47.779282508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 15 12:53:47.783156 containerd[1810]: time="2025-01-15T12:53:47.783107299Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:47.787648 containerd[1810]: time="2025-01-15T12:53:47.787600930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:47.788516 containerd[1810]: time="2025-01-15T12:53:47.788404848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.212891597s" Jan 15 12:53:47.788516 containerd[1810]: time="2025-01-15T12:53:47.788435288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 15 12:53:47.789821 containerd[1810]: time="2025-01-15T12:53:47.789788565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 12:53:47.791288 containerd[1810]: time="2025-01-15T12:53:47.791239242Z" level=info msg="CreateContainer within sandbox \"f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 15 12:53:47.826605 containerd[1810]: time="2025-01-15T12:53:47.826543886Z" level=info msg="CreateContainer within sandbox \"f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"044d4dd506ce329515fc6c01c56b08eedf7fc1c67466cb9352fb7d69397beee6\"" Jan 15 12:53:47.827314 containerd[1810]: time="2025-01-15T12:53:47.827285045Z" level=info msg="StartContainer for \"044d4dd506ce329515fc6c01c56b08eedf7fc1c67466cb9352fb7d69397beee6\"" Jan 15 12:53:47.866033 containerd[1810]: time="2025-01-15T12:53:47.865979962Z" level=info msg="StopPodSandbox for \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\"" Jan 15 12:53:47.911866 containerd[1810]: time="2025-01-15T12:53:47.911813943Z" level=info msg="StartContainer for \"044d4dd506ce329515fc6c01c56b08eedf7fc1c67466cb9352fb7d69397beee6\" returns successfully" Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.926 [INFO][5300] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 
12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.927 [INFO][5300] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" iface="eth0" netns="/var/run/netns/cni-5d65e98f-f388-721e-8e5a-7973044af39f" Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.928 [INFO][5300] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" iface="eth0" netns="/var/run/netns/cni-5d65e98f-f388-721e-8e5a-7973044af39f" Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.928 [INFO][5300] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" iface="eth0" netns="/var/run/netns/cni-5d65e98f-f388-721e-8e5a-7973044af39f" Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.928 [INFO][5300] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.928 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.946 [INFO][5314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" HandleID="k8s-pod-network.73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.946 [INFO][5314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.946 [INFO][5314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.954 [WARNING][5314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" HandleID="k8s-pod-network.73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.954 [INFO][5314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" HandleID="k8s-pod-network.73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.956 [INFO][5314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:53:47.958560 containerd[1810]: 2025-01-15 12:53:47.957 [INFO][5300] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:53:47.959020 containerd[1810]: time="2025-01-15T12:53:47.958726443Z" level=info msg="TearDown network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\" successfully" Jan 15 12:53:47.959020 containerd[1810]: time="2025-01-15T12:53:47.958753402Z" level=info msg="StopPodSandbox for \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\" returns successfully" Jan 15 12:53:47.960176 containerd[1810]: time="2025-01-15T12:53:47.959720720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b49cb6669-sq7jw,Uid:853a9ed3-353b-4e0d-99d5-673d9014d6e9,Namespace:calico-apiserver,Attempt:1,}" Jan 15 12:53:48.044977 systemd[1]: run-netns-cni\x2d5d65e98f\x2df388\x2d721e\x2d8e5a\x2d7973044af39f.mount: Deactivated successfully. Jan 15 12:53:48.122996 systemd-networkd[1381]: cali88f9a69c6b6: Link UP Jan 15 12:53:48.124403 systemd-networkd[1381]: cali88f9a69c6b6: Gained carrier Jan 15 12:53:48.126999 systemd-networkd[1381]: cali7cd56c571cb: Gained IPv6LL Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.047 [INFO][5322] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0 calico-apiserver-6b49cb6669- calico-apiserver 853a9ed3-353b-4e0d-99d5-673d9014d6e9 831 0 2025-01-15 12:53:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b49cb6669 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-b64d8040ed calico-apiserver-6b49cb6669-sq7jw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali88f9a69c6b6 [] []}} ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-sq7jw" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.047 [INFO][5322] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-sq7jw" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.077 [INFO][5333] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" HandleID="k8s-pod-network.841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.087 [INFO][5333] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" HandleID="k8s-pod-network.841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000222b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-b64d8040ed", "pod":"calico-apiserver-6b49cb6669-sq7jw", "timestamp":"2025-01-15 
12:53:48.077405428 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b64d8040ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.087 [INFO][5333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.087 [INFO][5333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.087 [INFO][5333] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b64d8040ed' Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.089 [INFO][5333] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.092 [INFO][5333] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.096 [INFO][5333] ipam/ipam.go 489: Trying affinity for 192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.097 [INFO][5333] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.100 [INFO][5333] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.100 [INFO][5333] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.101 [INFO][5333] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.108 [INFO][5333] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.115 [INFO][5333] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.5/26] block=192.168.72.0/26 handle="k8s-pod-network.841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.115 [INFO][5333] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.5/26] handle="k8s-pod-network.841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.115 [INFO][5333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
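The ipam/ipam.go lines above trace a fixed pattern: acquire the host-wide IPAM lock, look up the host's block affinity (192.168.72.0/26 here), load the block, claim a free address for a handle, write the block back to the datastore, release the lock. Below is a minimal Go sketch of that allocation pattern — it is not Calico's actual implementation, the in-process mutex merely stands in for the host-wide lock, and lowest-free-first ordering is only inferred from the sequential .4/.5/.6 assignments in this log:

```go
// Minimal sketch of block-based IP allocation as traced by the ipam/ipam.go
// log lines above. Not Calico's real code.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu     sync.Mutex            // analogue of the "host-wide IPAM lock"
	prefix netip.Prefix          // e.g. the affine block 192.168.72.0/26
	used   map[netip.Addr]string // address -> handle ID that claimed it
}

func newBlock(cidr string) (*block, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return nil, err
	}
	return &block{prefix: p, used: make(map[netip.Addr]string)}, nil
}

// assign claims the lowest free address in the block for handle.
func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer b.mu.Unlock()
	for a := b.prefix.Addr(); b.prefix.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.prefix)
}

func main() {
	blk, _ := newBlock("192.168.72.0/26")
	for i := 0; i < 5; i++ { // .0-.4 already claimed by earlier workloads
		blk.assign(fmt.Sprintf("existing-%d", i))
	}
	ip, _ := blk.assign("k8s-pod-network.841706ef6...")
	fmt.Println(ip) // 192.168.72.5, matching "Successfully claimed IPs"
}
```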
Jan 15 12:53:48.149612 containerd[1810]: 2025-01-15 12:53:48.115 [INFO][5333] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.5/26] IPv6=[] ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" HandleID="k8s-pod-network.841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:48.150176 containerd[1810]: 2025-01-15 12:53:48.118 [INFO][5322] cni-plugin/k8s.go 386: Populated endpoint ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-sq7jw" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0", GenerateName:"calico-apiserver-6b49cb6669-", Namespace:"calico-apiserver", SelfLink:"", UID:"853a9ed3-353b-4e0d-99d5-673d9014d6e9", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b49cb6669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"", Pod:"calico-apiserver-6b49cb6669-sq7jw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88f9a69c6b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:48.150176 containerd[1810]: 2025-01-15 12:53:48.118 [INFO][5322] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.5/32] ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-sq7jw" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:48.150176 containerd[1810]: 2025-01-15 12:53:48.119 [INFO][5322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88f9a69c6b6 ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-sq7jw" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:48.150176 containerd[1810]: 2025-01-15 12:53:48.125 [INFO][5322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-sq7jw" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:48.150176 containerd[1810]: 2025-01-15 12:53:48.125 [INFO][5322] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-sq7jw" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0", GenerateName:"calico-apiserver-6b49cb6669-", Namespace:"calico-apiserver", SelfLink:"", UID:"853a9ed3-353b-4e0d-99d5-673d9014d6e9", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b49cb6669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b", Pod:"calico-apiserver-6b49cb6669-sq7jw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88f9a69c6b6", MAC:"ae:0c:ed:8e:a6:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:48.150176 containerd[1810]: 2025-01-15 12:53:48.145 [INFO][5322] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b" Namespace="calico-apiserver" Pod="calico-apiserver-6b49cb6669-sq7jw" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:53:48.175523 containerd[1810]: time="2025-01-15T12:53:48.175392857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:48.176086 containerd[1810]: time="2025-01-15T12:53:48.175563737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:48.176086 containerd[1810]: time="2025-01-15T12:53:48.175999816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:48.176350 containerd[1810]: time="2025-01-15T12:53:48.176160976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:48.221066 containerd[1810]: time="2025-01-15T12:53:48.221011320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b49cb6669-sq7jw,Uid:853a9ed3-353b-4e0d-99d5-673d9014d6e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b\"" Jan 15 12:53:48.253430 systemd-networkd[1381]: cali0c0f19a9e2d: Gained IPv6LL Jan 15 12:53:48.381380 systemd-networkd[1381]: cali535a8b3f439: Gained IPv6LL Jan 15 12:53:48.863591 containerd[1810]: time="2025-01-15T12:53:48.863520461Z" level=info msg="StopPodSandbox for \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\"" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.914 [INFO][5406] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.914 [INFO][5406] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" iface="eth0" netns="/var/run/netns/cni-c527d25b-03f2-834e-4e7c-8465f8cf4288" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.915 [INFO][5406] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" iface="eth0" netns="/var/run/netns/cni-c527d25b-03f2-834e-4e7c-8465f8cf4288" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.915 [INFO][5406] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" iface="eth0" netns="/var/run/netns/cni-c527d25b-03f2-834e-4e7c-8465f8cf4288" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.915 [INFO][5406] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.915 [INFO][5406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.938 [INFO][5412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" HandleID="k8s-pod-network.bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.938 [INFO][5412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.938 [INFO][5412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.947 [WARNING][5412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" HandleID="k8s-pod-network.bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.947 [INFO][5412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" HandleID="k8s-pod-network.bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.948 [INFO][5412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:53:48.951592 containerd[1810]: 2025-01-15 12:53:48.950 [INFO][5406] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:53:48.954115 systemd[1]: run-netns-cni\x2dc527d25b\x2d03f2\x2d834e\x2d4e7c\x2d8465f8cf4288.mount: Deactivated successfully. Jan 15 12:53:48.955330 containerd[1810]: time="2025-01-15T12:53:48.954402585Z" level=info msg="TearDown network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\" successfully" Jan 15 12:53:48.955330 containerd[1810]: time="2025-01-15T12:53:48.954452625Z" level=info msg="StopPodSandbox for \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\" returns successfully" Jan 15 12:53:48.955330 containerd[1810]: time="2025-01-15T12:53:48.955077464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-frkvk,Uid:23ead334-e963-4418-9ca2-7ff1cba5daa6,Namespace:kube-system,Attempt:1,}" Jan 15 12:53:49.021382 systemd-networkd[1381]: cali1752097df50: Gained IPv6LL Jan 15 12:53:49.127838 systemd-networkd[1381]: cali0f639211327: Link UP Jan 15 12:53:49.129187 systemd-networkd[1381]: cali0f639211327: Gained carrier Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.059 [INFO][5419] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0 coredns-76f75df574- kube-system 23ead334-e963-4418-9ca2-7ff1cba5daa6 842 0 2025-01-15 12:53:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-b64d8040ed coredns-76f75df574-frkvk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0f639211327 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Namespace="kube-system" Pod="coredns-76f75df574-frkvk" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.060 [INFO][5419] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Namespace="kube-system" Pod="coredns-76f75df574-frkvk" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.086 [INFO][5430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" 
HandleID="k8s-pod-network.39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.096 [INFO][5430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" HandleID="k8s-pod-network.39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000334ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-b64d8040ed", "pod":"coredns-76f75df574-frkvk", "timestamp":"2025-01-15 12:53:49.086188463 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b64d8040ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.096 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.096 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.096 [INFO][5430] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b64d8040ed' Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.098 [INFO][5430] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.101 [INFO][5430] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.105 [INFO][5430] ipam/ipam.go 489: Trying affinity for 192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.106 [INFO][5430] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.108 [INFO][5430] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.108 [INFO][5430] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.110 [INFO][5430] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1 Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.114 [INFO][5430] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.123 [INFO][5430] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.6/26] block=192.168.72.0/26 handle="k8s-pod-network.39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.123 [INFO][5430] 
ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.6/26] handle="k8s-pod-network.39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" host="ci-4081.3.0-a-b64d8040ed" Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.123 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:53:49.145581 containerd[1810]: 2025-01-15 12:53:49.123 [INFO][5430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.6/26] IPv6=[] ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" HandleID="k8s-pod-network.39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:49.148245 containerd[1810]: 2025-01-15 12:53:49.125 [INFO][5419] cni-plugin/k8s.go 386: Populated endpoint ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Namespace="kube-system" Pod="coredns-76f75df574-frkvk" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"23ead334-e963-4418-9ca2-7ff1cba5daa6", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"", Pod:"coredns-76f75df574-frkvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f639211327", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:49.148245 containerd[1810]: 2025-01-15 12:53:49.125 [INFO][5419] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.6/32] ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Namespace="kube-system" Pod="coredns-76f75df574-frkvk" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:49.148245 containerd[1810]: 2025-01-15 12:53:49.125 [INFO][5419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f639211327 ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Namespace="kube-system" Pod="coredns-76f75df574-frkvk" 
WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:49.148245 containerd[1810]: 2025-01-15 12:53:49.128 [INFO][5419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Namespace="kube-system" Pod="coredns-76f75df574-frkvk" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:49.148245 containerd[1810]: 2025-01-15 12:53:49.128 [INFO][5419] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Namespace="kube-system" Pod="coredns-76f75df574-frkvk" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"23ead334-e963-4418-9ca2-7ff1cba5daa6", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1", Pod:"coredns-76f75df574-frkvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f639211327", MAC:"e2:af:ef:77:11:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:53:49.148245 containerd[1810]: 2025-01-15 12:53:49.143 [INFO][5419] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1" Namespace="kube-system" Pod="coredns-76f75df574-frkvk" WorkloadEndpoint="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:53:49.168570 containerd[1810]: time="2025-01-15T12:53:49.168473966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:53:49.168570 containerd[1810]: time="2025-01-15T12:53:49.168535686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:53:49.168570 containerd[1810]: time="2025-01-15T12:53:49.168550646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:49.168801 containerd[1810]: time="2025-01-15T12:53:49.168637926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:53:49.197985 systemd[1]: run-containerd-runc-k8s.io-39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1-runc.UVJ2Wz.mount: Deactivated successfully. Jan 15 12:53:49.234832 containerd[1810]: time="2025-01-15T12:53:49.234764424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-frkvk,Uid:23ead334-e963-4418-9ca2-7ff1cba5daa6,Namespace:kube-system,Attempt:1,} returns sandbox id \"39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1\"" Jan 15 12:53:49.267349 containerd[1810]: time="2025-01-15T12:53:49.267238314Z" level=info msg="CreateContainer within sandbox \"39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 12:53:49.300231 containerd[1810]: time="2025-01-15T12:53:49.300170083Z" level=info msg="CreateContainer within sandbox \"39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd968c99cf1a38fe645452fcaa4fca2166f96306d52d4cab25b4a301c04507c0\"" Jan 15 12:53:49.302649 containerd[1810]: time="2025-01-15T12:53:49.301400121Z" level=info msg="StartContainer for \"cd968c99cf1a38fe645452fcaa4fca2166f96306d52d4cab25b4a301c04507c0\"" Jan 15 12:53:49.405312 systemd-networkd[1381]: cali88f9a69c6b6: Gained IPv6LL Jan 15 12:53:49.518549 containerd[1810]: time="2025-01-15T12:53:49.518498672Z" level=info msg="StartContainer for \"cd968c99cf1a38fe645452fcaa4fca2166f96306d52d4cab25b4a301c04507c0\" returns successfully" Jan 15 12:53:50.063241 containerd[1810]: time="2025-01-15T12:53:50.062739945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:50.065217 containerd[1810]: time="2025-01-15T12:53:50.065094701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 15 12:53:50.068911 containerd[1810]: time="2025-01-15T12:53:50.068856294Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:50.073471 containerd[1810]: time="2025-01-15T12:53:50.073434646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:50.074160 containerd[1810]: time="2025-01-15T12:53:50.074013245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.28409196s" Jan 15 12:53:50.074160 containerd[1810]: time="2025-01-15T12:53:50.074051205Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 15 12:53:50.077668 containerd[1810]: time="2025-01-15T12:53:50.076263081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 15 12:53:50.077668 containerd[1810]: time="2025-01-15T12:53:50.077146599Z" level=info msg="CreateContainer within sandbox \"4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 12:53:50.143842 containerd[1810]: time="2025-01-15T12:53:50.143780801Z" level=info msg="CreateContainer within sandbox \"4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d0737db503210b477498ec06270ed86c640a128bb346ed3a9ddac93ee0e5546\"" Jan 15 12:53:50.145203 containerd[1810]: time="2025-01-15T12:53:50.145097959Z" level=info msg="StartContainer for \"2d0737db503210b477498ec06270ed86c640a128bb346ed3a9ddac93ee0e5546\"" Jan 15 12:53:50.206234 kubelet[3479]: I0115 12:53:50.204647 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-frkvk" podStartSLOduration=50.204603493 podStartE2EDuration="50.204603493s" podCreationTimestamp="2025-01-15 12:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:53:50.18381205 +0000 UTC m=+64.430290835" watchObservedRunningTime="2025-01-15 12:53:50.204603493 +0000 UTC m=+64.451082198" Jan 15 12:53:50.247701 containerd[1810]: time="2025-01-15T12:53:50.247500537Z" level=info msg="StartContainer for \"2d0737db503210b477498ec06270ed86c640a128bb346ed3a9ddac93ee0e5546\" returns successfully" Jan 15 12:53:50.877337 systemd-networkd[1381]: cali0f639211327: Gained IPv6LL Jan 15 12:53:52.156760 kubelet[3479]: I0115 12:53:52.156569 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:53:52.881208 containerd[1810]: time="2025-01-15T12:53:52.880955256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:52.883224 containerd[1810]: time="2025-01-15T12:53:52.883097772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 15 12:53:52.888655 containerd[1810]: time="2025-01-15T12:53:52.888612322Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:52.905888 containerd[1810]: time="2025-01-15T12:53:52.905833612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:52.910639 containerd[1810]: time="2025-01-15T12:53:52.910574403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.834278642s" Jan 15 
12:53:52.910639 containerd[1810]: time="2025-01-15T12:53:52.910623763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 15 12:53:52.912034 containerd[1810]: time="2025-01-15T12:53:52.911990001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 15 12:53:52.930015 containerd[1810]: time="2025-01-15T12:53:52.929959569Z" level=info msg="CreateContainer within sandbox \"3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 15 12:53:52.970832 containerd[1810]: time="2025-01-15T12:53:52.970679536Z" level=info msg="CreateContainer within sandbox \"3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"00c4e8ead5f63bee846fdda87ae7840766b62f49e606185a9780dd5fc5ea211f\"" Jan 15 12:53:52.971626 containerd[1810]: time="2025-01-15T12:53:52.971546775Z" level=info msg="StartContainer for \"00c4e8ead5f63bee846fdda87ae7840766b62f49e606185a9780dd5fc5ea211f\"" Jan 15 12:53:53.038991 containerd[1810]: time="2025-01-15T12:53:53.038945855Z" level=info msg="StartContainer for \"00c4e8ead5f63bee846fdda87ae7840766b62f49e606185a9780dd5fc5ea211f\" returns successfully" Jan 15 12:53:53.174471 kubelet[3479]: I0115 12:53:53.173561 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b49cb6669-qcv6d" podStartSLOduration=27.673827064 podStartE2EDuration="31.173500816s" podCreationTimestamp="2025-01-15 12:53:22 +0000 UTC" firstStartedPulling="2025-01-15 12:53:46.575010332 +0000 UTC m=+60.821489077" lastFinishedPulling="2025-01-15 12:53:50.074684084 +0000 UTC m=+64.321162829" observedRunningTime="2025-01-15 12:53:51.169432658 +0000 UTC m=+65.415911403" watchObservedRunningTime="2025-01-15 12:53:53.173500816 +0000 UTC m=+67.419979561" Jan 15 12:53:53.191299 kubelet[3479]: I0115 12:53:53.188931 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-666f9cb696-hwrgm" podStartSLOduration=25.509830909 podStartE2EDuration="31.188882309s" podCreationTimestamp="2025-01-15 12:53:22 +0000 UTC" firstStartedPulling="2025-01-15 12:53:47.232169482 +0000 UTC m=+61.478648227" lastFinishedPulling="2025-01-15 12:53:52.911220922 +0000 UTC m=+67.157699627" observedRunningTime="2025-01-15 12:53:53.18795923 +0000 UTC m=+67.434437975" watchObservedRunningTime="2025-01-15 12:53:53.188882309 +0000 UTC m=+67.435361054" Jan 15 12:53:54.157640 containerd[1810]: time="2025-01-15T12:53:54.157582667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:54.159923 containerd[1810]: time="2025-01-15T12:53:54.159887783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 15 12:53:54.164558 containerd[1810]: time="2025-01-15T12:53:54.164503175Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:54.169948 containerd[1810]: time="2025-01-15T12:53:54.169817925Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:54.171739 containerd[1810]: time="2025-01-15T12:53:54.171628122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.259600241s" Jan 15 12:53:54.171739 containerd[1810]: time="2025-01-15T12:53:54.171662482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 15 12:53:54.174482 containerd[1810]: time="2025-01-15T12:53:54.173368759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 12:53:54.175220 containerd[1810]: time="2025-01-15T12:53:54.174783636Z" level=info msg="CreateContainer within sandbox \"f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 15 12:53:54.230334 containerd[1810]: time="2025-01-15T12:53:54.230179538Z" level=info msg="CreateContainer within sandbox \"f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c41b2f9b314bbf11596289819c37674dbd699b11172845c0bd799252ebb907b2\"" Jan 15 12:53:54.232572 containerd[1810]: time="2025-01-15T12:53:54.232427214Z" level=info msg="StartContainer for \"c41b2f9b314bbf11596289819c37674dbd699b11172845c0bd799252ebb907b2\"" Jan 15 12:53:54.312586 containerd[1810]: time="2025-01-15T12:53:54.312538911Z" level=info msg="StartContainer for \"c41b2f9b314bbf11596289819c37674dbd699b11172845c0bd799252ebb907b2\" returns successfully" Jan 15 12:53:54.483510 containerd[1810]: time="2025-01-15T12:53:54.483035328Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:53:54.486582 containerd[1810]: time="2025-01-15T12:53:54.486521882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 15 12:53:54.488966 containerd[1810]: time="2025-01-15T12:53:54.488876158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 315.478119ms" Jan 15 12:53:54.488966 containerd[1810]: time="2025-01-15T12:53:54.488932558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 15 12:53:54.491475 containerd[1810]: time="2025-01-15T12:53:54.491431753Z" level=info msg="CreateContainer within sandbox \"841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 12:53:54.523894 containerd[1810]: 
time="2025-01-15T12:53:54.523846576Z" level=info msg="CreateContainer within sandbox \"841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"57b2a984689c752840da713dc0790692edf8a644b60df3822f9f6d6932e48388\"" Jan 15 12:53:54.524786 containerd[1810]: time="2025-01-15T12:53:54.524746614Z" level=info msg="StartContainer for \"57b2a984689c752840da713dc0790692edf8a644b60df3822f9f6d6932e48388\"" Jan 15 12:53:54.577978 containerd[1810]: time="2025-01-15T12:53:54.577929400Z" level=info msg="StartContainer for \"57b2a984689c752840da713dc0790692edf8a644b60df3822f9f6d6932e48388\" returns successfully" Jan 15 12:53:54.991382 kubelet[3479]: I0115 12:53:54.991165 3479 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 15 12:53:54.991382 kubelet[3479]: I0115 12:53:54.991211 3479 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 15 12:53:55.195209 kubelet[3479]: I0115 12:53:55.190688 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-z9tjc" podStartSLOduration=25.592387845 podStartE2EDuration="33.190642591s" podCreationTimestamp="2025-01-15 12:53:22 +0000 UTC" firstStartedPulling="2025-01-15 12:53:46.574341534 +0000 UTC m=+60.820820239" lastFinishedPulling="2025-01-15 12:53:54.17259624 +0000 UTC m=+68.419074985" observedRunningTime="2025-01-15 12:53:55.182879924 +0000 UTC m=+69.429358669" watchObservedRunningTime="2025-01-15 12:53:55.190642591 +0000 UTC m=+69.437121336" Jan 15 12:53:55.208394 kubelet[3479]: I0115 12:53:55.208350 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b49cb6669-sq7jw" podStartSLOduration=26.94068948 podStartE2EDuration="33.207421441s" podCreationTimestamp="2025-01-15 12:53:22 +0000 UTC" firstStartedPulling="2025-01-15 12:53:48.222614876 +0000 UTC m=+62.469093581" lastFinishedPulling="2025-01-15 12:53:54.489346797 +0000 UTC m=+68.735825542" observedRunningTime="2025-01-15 12:53:55.205611724 +0000 UTC m=+69.452090469" watchObservedRunningTime="2025-01-15 12:53:55.207421441 +0000 UTC m=+69.453900226" Jan 15 12:53:56.177972 kubelet[3479]: I0115 12:53:56.177260 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:54:20.016960 kubelet[3479]: I0115 12:54:20.016754 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:54:23.313566 kubelet[3479]: I0115 12:54:23.313336 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:54:39.726778 systemd[1]: run-containerd-runc-k8s.io-00c4e8ead5f63bee846fdda87ae7840766b62f49e606185a9780dd5fc5ea211f-runc.LT0MuD.mount: Deactivated successfully. Jan 15 12:54:41.632438 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:48538.service - OpenSSH per-connection server daemon (10.200.16.10:48538). Jan 15 12:54:42.065720 sshd[5864]: Accepted publickey for core from 10.200.16.10 port 48538 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:42.067941 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:42.073452 systemd-logind[1784]: New session 10 of user core. Jan 15 12:54:42.079794 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 15 12:54:42.456813 sshd[5864]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:42.459917 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:48538.service: Deactivated successfully. Jan 15 12:54:42.460063 systemd-logind[1784]: Session 10 logged out. Waiting for processes to exit. Jan 15 12:54:42.464160 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 12:54:42.466596 systemd-logind[1784]: Removed session 10. Jan 15 12:54:45.880988 containerd[1810]: time="2025-01-15T12:54:45.880936764Z" level=info msg="StopPodSandbox for \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\"" Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.939 [WARNING][5892] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d4339fa4-8d81-4248-bba4-5dd6857b8e52", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5", Pod:"coredns-76f75df574-ztv7l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali535a8b3f439", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.939 [INFO][5892] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.941 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" iface="eth0" netns="" Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.941 [INFO][5892] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.941 [INFO][5892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.985 [INFO][5898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" HandleID="k8s-pod-network.048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.985 [INFO][5898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.986 [INFO][5898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.994 [WARNING][5898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" HandleID="k8s-pod-network.048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.994 [INFO][5898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" HandleID="k8s-pod-network.048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.996 [INFO][5898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.001187 containerd[1810]: 2025-01-15 12:54:45.999 [INFO][5892] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:54:46.001187 containerd[1810]: time="2025-01-15T12:54:46.001024297Z" level=info msg="TearDown network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\" successfully" Jan 15 12:54:46.001187 containerd[1810]: time="2025-01-15T12:54:46.001050257Z" level=info msg="StopPodSandbox for \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\" returns successfully" Jan 15 12:54:46.002523 containerd[1810]: time="2025-01-15T12:54:46.002450374Z" level=info msg="RemovePodSandbox for \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\"" Jan 15 12:54:46.005541 containerd[1810]: time="2025-01-15T12:54:46.005496448Z" level=info msg="Forcibly stopping sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\"" Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.079 [WARNING][5916] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d4339fa4-8d81-4248-bba4-5dd6857b8e52", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"32ecb885e9ba9f706558a6303ee8dd7467a3ae1e11b7d9bcac6a7d8f9497dfc5", Pod:"coredns-76f75df574-ztv7l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali535a8b3f439", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.080 [INFO][5916] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.080 [INFO][5916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" iface="eth0" netns="" Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.080 [INFO][5916] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.080 [INFO][5916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.111 [INFO][5923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" HandleID="k8s-pod-network.048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.111 [INFO][5923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.111 [INFO][5923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.121 [WARNING][5923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" HandleID="k8s-pod-network.048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.121 [INFO][5923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" HandleID="k8s-pod-network.048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--ztv7l-eth0" Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.122 [INFO][5923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.125584 containerd[1810]: 2025-01-15 12:54:46.124 [INFO][5916] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41" Jan 15 12:54:46.126028 containerd[1810]: time="2025-01-15T12:54:46.125625861Z" level=info msg="TearDown network for sandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\" successfully" Jan 15 12:54:46.135221 containerd[1810]: time="2025-01-15T12:54:46.135097363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:54:46.135221 containerd[1810]: time="2025-01-15T12:54:46.135169963Z" level=info msg="RemovePodSandbox \"048aa3c33e94796e8dc4dad4adeb9379652392bc6965960cb06aeae5d7535b41\" returns successfully" Jan 15 12:54:46.136026 containerd[1810]: time="2025-01-15T12:54:46.135778602Z" level=info msg="StopPodSandbox for \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\"" Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.170 [WARNING][5942] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224", Pod:"csi-node-driver-z9tjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c0f19a9e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.170 [INFO][5942] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.170 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" iface="eth0" netns="" Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.170 [INFO][5942] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.170 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.190 [INFO][5948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" HandleID="k8s-pod-network.c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.190 [INFO][5948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.190 [INFO][5948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.198 [WARNING][5948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" HandleID="k8s-pod-network.c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.198 [INFO][5948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" HandleID="k8s-pod-network.c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.199 [INFO][5948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.202748 containerd[1810]: 2025-01-15 12:54:46.201 [INFO][5942] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:54:46.203504 containerd[1810]: time="2025-01-15T12:54:46.203266354Z" level=info msg="TearDown network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\" successfully" Jan 15 12:54:46.203504 containerd[1810]: time="2025-01-15T12:54:46.203299114Z" level=info msg="StopPodSandbox for \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\" returns successfully" Jan 15 12:54:46.203826 containerd[1810]: time="2025-01-15T12:54:46.203793913Z" level=info msg="RemovePodSandbox for \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\"" Jan 15 12:54:46.203870 containerd[1810]: time="2025-01-15T12:54:46.203839793Z" level=info msg="Forcibly stopping sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\"" Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.238 [WARNING][5967] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e31c47b-ec08-47f1-903e-14f9c9ca8a9b", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"f593d34991562675ebe410e6f21ea6e26338b47ee2f383da006ed2565bdf3224", Pod:"csi-node-driver-z9tjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c0f19a9e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.239 [INFO][5967] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.239 [INFO][5967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" iface="eth0" netns="" Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.239 [INFO][5967] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.239 [INFO][5967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.260 [INFO][5973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" HandleID="k8s-pod-network.c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.260 [INFO][5973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.260 [INFO][5973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.268 [WARNING][5973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" HandleID="k8s-pod-network.c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.268 [INFO][5973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" HandleID="k8s-pod-network.c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Workload="ci--4081.3.0--a--b64d8040ed-k8s-csi--node--driver--z9tjc-eth0" Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.269 [INFO][5973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.273952 containerd[1810]: 2025-01-15 12:54:46.271 [INFO][5967] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279" Jan 15 12:54:46.275435 containerd[1810]: time="2025-01-15T12:54:46.273937781Z" level=info msg="TearDown network for sandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\" successfully" Jan 15 12:54:46.286107 containerd[1810]: time="2025-01-15T12:54:46.286055358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:54:46.286107 containerd[1810]: time="2025-01-15T12:54:46.286120238Z" level=info msg="RemovePodSandbox \"c7ce76c3606572387afcff182848ca8c6a26988bac7633441b24b94e23b55279\" returns successfully" Jan 15 12:54:46.287118 containerd[1810]: time="2025-01-15T12:54:46.286692277Z" level=info msg="StopPodSandbox for \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\"" Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.338 [WARNING][5991] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0", GenerateName:"calico-apiserver-6b49cb6669-", Namespace:"calico-apiserver", SelfLink:"", UID:"853a9ed3-353b-4e0d-99d5-673d9014d6e9", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b49cb6669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b", Pod:"calico-apiserver-6b49cb6669-sq7jw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88f9a69c6b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.338 [INFO][5991] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.338 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" iface="eth0" netns="" Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.338 [INFO][5991] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.338 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.360 [INFO][5998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" HandleID="k8s-pod-network.73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.360 [INFO][5998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.360 [INFO][5998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.368 [WARNING][5998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" HandleID="k8s-pod-network.73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.368 [INFO][5998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" HandleID="k8s-pod-network.73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.370 [INFO][5998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.372545 containerd[1810]: 2025-01-15 12:54:46.371 [INFO][5991] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:54:46.373128 containerd[1810]: time="2025-01-15T12:54:46.372999393Z" level=info msg="TearDown network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\" successfully" Jan 15 12:54:46.373128 containerd[1810]: time="2025-01-15T12:54:46.373028433Z" level=info msg="StopPodSandbox for \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\" returns successfully" Jan 15 12:54:46.373866 containerd[1810]: time="2025-01-15T12:54:46.373596432Z" level=info msg="RemovePodSandbox for \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\"" Jan 15 12:54:46.373866 containerd[1810]: time="2025-01-15T12:54:46.373629032Z" level=info msg="Forcibly stopping sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\"" Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.411 [WARNING][6017] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0", GenerateName:"calico-apiserver-6b49cb6669-", Namespace:"calico-apiserver", SelfLink:"", UID:"853a9ed3-353b-4e0d-99d5-673d9014d6e9", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b49cb6669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"841706ef613d705dd7e464eac3a25b72c33a324c20d9269ff2a4176f4a1a2d5b", Pod:"calico-apiserver-6b49cb6669-sq7jw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88f9a69c6b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.411 [INFO][6017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.411 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" iface="eth0" netns="" Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.411 [INFO][6017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.411 [INFO][6017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.431 [INFO][6024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" HandleID="k8s-pod-network.73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.432 [INFO][6024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.432 [INFO][6024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.440 [WARNING][6024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" HandleID="k8s-pod-network.73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.440 [INFO][6024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" HandleID="k8s-pod-network.73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--sq7jw-eth0" Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.444 [INFO][6024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.448441 containerd[1810]: 2025-01-15 12:54:46.446 [INFO][6017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b" Jan 15 12:54:46.449448 containerd[1810]: time="2025-01-15T12:54:46.448892570Z" level=info msg="TearDown network for sandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\" successfully" Jan 15 12:54:46.461777 containerd[1810]: time="2025-01-15T12:54:46.461660106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:54:46.461777 containerd[1810]: time="2025-01-15T12:54:46.461734666Z" level=info msg="RemovePodSandbox \"73288eccf21cbb847ea1bed42570dc23d082de8a52e1703d642d68f4f332bd5b\" returns successfully" Jan 15 12:54:46.462463 containerd[1810]: time="2025-01-15T12:54:46.462140185Z" level=info msg="StopPodSandbox for \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\"" Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.503 [WARNING][6042] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0", GenerateName:"calico-apiserver-6b49cb6669-", Namespace:"calico-apiserver", SelfLink:"", UID:"b34cd598-6eb1-42df-917d-effcdfc5a29b", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b49cb6669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034", Pod:"calico-apiserver-6b49cb6669-qcv6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cd56c571cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.504 [INFO][6042] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.504 [INFO][6042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" iface="eth0" netns="" Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.504 [INFO][6042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.504 [INFO][6042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.527 [INFO][6048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" HandleID="k8s-pod-network.4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.527 [INFO][6048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.527 [INFO][6048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.535 [WARNING][6048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" HandleID="k8s-pod-network.4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.535 [INFO][6048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" HandleID="k8s-pod-network.4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.537 [INFO][6048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.540415 containerd[1810]: 2025-01-15 12:54:46.538 [INFO][6042] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:54:46.540858 containerd[1810]: time="2025-01-15T12:54:46.540465637Z" level=info msg="TearDown network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\" successfully" Jan 15 12:54:46.540858 containerd[1810]: time="2025-01-15T12:54:46.540491077Z" level=info msg="StopPodSandbox for \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\" returns successfully" Jan 15 12:54:46.541259 containerd[1810]: time="2025-01-15T12:54:46.541223195Z" level=info msg="RemovePodSandbox for \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\"" Jan 15 12:54:46.541259 containerd[1810]: time="2025-01-15T12:54:46.541258835Z" level=info msg="Forcibly stopping sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\"" Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.586 [WARNING][6066] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0", GenerateName:"calico-apiserver-6b49cb6669-", Namespace:"calico-apiserver", SelfLink:"", UID:"b34cd598-6eb1-42df-917d-effcdfc5a29b", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b49cb6669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"4b023d2e406846123fe76ecfa06615f3e86dd071d9f0db19cf32a0ec623d4034", Pod:"calico-apiserver-6b49cb6669-qcv6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cd56c571cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.587 [INFO][6066] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.587 [INFO][6066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" iface="eth0" netns="" Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.587 [INFO][6066] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.587 [INFO][6066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.608 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" HandleID="k8s-pod-network.4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.608 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.608 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.619 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" HandleID="k8s-pod-network.4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.619 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" HandleID="k8s-pod-network.4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--apiserver--6b49cb6669--qcv6d-eth0" Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.621 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.623754 containerd[1810]: 2025-01-15 12:54:46.622 [INFO][6066] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b" Jan 15 12:54:46.624673 containerd[1810]: time="2025-01-15T12:54:46.624231959Z" level=info msg="TearDown network for sandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\" successfully" Jan 15 12:54:46.632746 containerd[1810]: time="2025-01-15T12:54:46.632693783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:54:46.633053 containerd[1810]: time="2025-01-15T12:54:46.632950382Z" level=info msg="RemovePodSandbox \"4b3ff0d5d1e1a40511a3595a0f654e46f86a0dfb654858351062a3764a0bc99b\" returns successfully" Jan 15 12:54:46.633577 containerd[1810]: time="2025-01-15T12:54:46.633542981Z" level=info msg="StopPodSandbox for \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\"" Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.671 [WARNING][6091] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0", GenerateName:"calico-kube-controllers-666f9cb696-", Namespace:"calico-system", SelfLink:"", UID:"9197b20e-feec-468b-98f4-a4ecccedcf24", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666f9cb696", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af", Pod:"calico-kube-controllers-666f9cb696-hwrgm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1752097df50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.672 [INFO][6091] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.672 [INFO][6091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" iface="eth0" netns="" Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.672 [INFO][6091] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.672 [INFO][6091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.690 [INFO][6097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" HandleID="k8s-pod-network.8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.690 [INFO][6097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.690 [INFO][6097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.698 [WARNING][6097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" HandleID="k8s-pod-network.8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.698 [INFO][6097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" HandleID="k8s-pod-network.8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.700 [INFO][6097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.702866 containerd[1810]: 2025-01-15 12:54:46.701 [INFO][6091] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:54:46.702866 containerd[1810]: time="2025-01-15T12:54:46.702669770Z" level=info msg="TearDown network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\" successfully" Jan 15 12:54:46.702866 containerd[1810]: time="2025-01-15T12:54:46.702694170Z" level=info msg="StopPodSandbox for \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\" returns successfully" Jan 15 12:54:46.707404 containerd[1810]: time="2025-01-15T12:54:46.703382449Z" level=info msg="RemovePodSandbox for \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\"" Jan 15 12:54:46.707507 containerd[1810]: time="2025-01-15T12:54:46.707411961Z" level=info msg="Forcibly stopping sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\"" Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.765 [WARNING][6115] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0", GenerateName:"calico-kube-controllers-666f9cb696-", Namespace:"calico-system", SelfLink:"", UID:"9197b20e-feec-468b-98f4-a4ecccedcf24", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666f9cb696", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"3321bb6943b4789168573ea03d633e8b1be28059bd30717b96dc72f3e6c5f8af", Pod:"calico-kube-controllers-666f9cb696-hwrgm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1752097df50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.765 [INFO][6115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.765 [INFO][6115] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" iface="eth0" netns="" Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.765 [INFO][6115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.765 [INFO][6115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.788 [INFO][6121] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" HandleID="k8s-pod-network.8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.788 [INFO][6121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.788 [INFO][6121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.800 [WARNING][6121] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" HandleID="k8s-pod-network.8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.800 [INFO][6121] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" HandleID="k8s-pod-network.8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Workload="ci--4081.3.0--a--b64d8040ed-k8s-calico--kube--controllers--666f9cb696--hwrgm-eth0" Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.802 [INFO][6121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.805441 containerd[1810]: 2025-01-15 12:54:46.803 [INFO][6115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c" Jan 15 12:54:46.805441 containerd[1810]: time="2025-01-15T12:54:46.805363416Z" level=info msg="TearDown network for sandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\" successfully" Jan 15 12:54:46.819840 containerd[1810]: time="2025-01-15T12:54:46.819782469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:54:46.820148 containerd[1810]: time="2025-01-15T12:54:46.819875429Z" level=info msg="RemovePodSandbox \"8fb6cb16fa0897a9a9215399988b2a4d39a51de393bc969a3f008fb35194330c\" returns successfully" Jan 15 12:54:46.820673 containerd[1810]: time="2025-01-15T12:54:46.820382908Z" level=info msg="StopPodSandbox for \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\"" Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.861 [WARNING][6139] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"23ead334-e963-4418-9ca2-7ff1cba5daa6", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1", Pod:"coredns-76f75df574-frkvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f639211327", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.862 [INFO][6139] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.862 [INFO][6139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" iface="eth0" netns="" Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.862 [INFO][6139] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.862 [INFO][6139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.888 [INFO][6145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" HandleID="k8s-pod-network.bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.889 [INFO][6145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.889 [INFO][6145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.913 [WARNING][6145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" HandleID="k8s-pod-network.bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.913 [INFO][6145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" HandleID="k8s-pod-network.bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.919 [INFO][6145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:46.925062 containerd[1810]: 2025-01-15 12:54:46.922 [INFO][6139] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:54:46.926957 containerd[1810]: time="2025-01-15T12:54:46.926295668Z" level=info msg="TearDown network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\" successfully" Jan 15 12:54:46.926957 containerd[1810]: time="2025-01-15T12:54:46.926327467Z" level=info msg="StopPodSandbox for \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\" returns successfully" Jan 15 12:54:46.930273 containerd[1810]: time="2025-01-15T12:54:46.928251864Z" level=info msg="RemovePodSandbox for \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\"" Jan 15 12:54:46.930273 containerd[1810]: time="2025-01-15T12:54:46.928285344Z" level=info msg="Forcibly stopping sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\"" Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.005 [WARNING][6164] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"23ead334-e963-4418-9ca2-7ff1cba5daa6", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b64d8040ed", ContainerID:"39c41bbacfc957183a0eadb8e28372b3b27b3f83a7331c7d96cd1556072465f1", Pod:"coredns-76f75df574-frkvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f639211327", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.006 [INFO][6164] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.006 [INFO][6164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" iface="eth0" netns="" Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.006 [INFO][6164] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.006 [INFO][6164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.027 [INFO][6170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" HandleID="k8s-pod-network.bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.027 [INFO][6170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.027 [INFO][6170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.035 [WARNING][6170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" HandleID="k8s-pod-network.bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.035 [INFO][6170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" HandleID="k8s-pod-network.bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Workload="ci--4081.3.0--a--b64d8040ed-k8s-coredns--76f75df574--frkvk-eth0" Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.036 [INFO][6170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:54:47.040033 containerd[1810]: 2025-01-15 12:54:47.038 [INFO][6164] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873" Jan 15 12:54:47.040033 containerd[1810]: time="2025-01-15T12:54:47.039520493Z" level=info msg="TearDown network for sandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\" successfully" Jan 15 12:54:47.056057 containerd[1810]: time="2025-01-15T12:54:47.055867863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:54:47.056057 containerd[1810]: time="2025-01-15T12:54:47.055938902Z" level=info msg="RemovePodSandbox \"bfcf738bc0b5d828348b8b553bde65ff40fac68fc344085e70484c9acdcba873\" returns successfully" Jan 15 12:54:47.532449 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:37926.service - OpenSSH per-connection server daemon (10.200.16.10:37926). Jan 15 12:54:47.963828 sshd[6176]: Accepted publickey for core from 10.200.16.10 port 37926 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:47.965515 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:47.969866 systemd-logind[1784]: New session 11 of user core. Jan 15 12:54:47.975012 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 12:54:48.347159 sshd[6176]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:48.351423 systemd-logind[1784]: Session 11 logged out. Waiting for processes to exit. Jan 15 12:54:48.352008 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:37926.service: Deactivated successfully. Jan 15 12:54:48.354937 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 12:54:48.355985 systemd-logind[1784]: Removed session 11. Jan 15 12:54:53.416440 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:37940.service - OpenSSH per-connection server daemon (10.200.16.10:37940). Jan 15 12:54:53.830335 sshd[6213]: Accepted publickey for core from 10.200.16.10 port 37940 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:53.831895 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:53.835994 systemd-logind[1784]: New session 12 of user core. Jan 15 12:54:53.841588 systemd[1]: Started session-12.scope - Session 12 of User core. 
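The two teardown passes above (StopPodSandbox, then the forcible RemovePodSandbox) show how Calico's CNI plugin keeps IP release idempotent: it serializes on the host-wide IPAM lock, releases the allocation by handle ID, falls back to the workload ID, and treats "address doesn't exist" as success, which is why the second pass still reaches "Teardown processing complete" after the first pass already freed 192.168.72.6/32. A minimal Go sketch of that pattern follows; hostLock, allocations, and both helpers are hypothetical stand-ins, not Calico's actual code (the real logic lives in ipam_plugin.go and libcalico-go).

// ipamrelease.go — a sketch of the idempotent release-by-handle pattern
// visible in the ipam_plugin.go records above. hostLock, allocations and
// both helpers are hypothetical stand-ins for Calico's IPAM client.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var (
	hostLock    sync.Mutex            // stands in for the host-wide IPAM lock
	allocations = map[string]string{} // handle ID -> IP, a toy allocation store
	errNotFound = errors.New("address not found")
)

func releaseByHandle(handle string) error {
	if _, ok := allocations[handle]; !ok {
		return errNotFound
	}
	delete(allocations, handle)
	return nil
}

// ReleasePodIPs mirrors the logged sequence: acquire the host-wide lock,
// release by handle ID ("Releasing address using handleID"), fall back to
// the workload ID, and treat a missing allocation as success so repeated
// teardowns stay idempotent.
func ReleasePodIPs(handleID, workloadID string) error {
	hostLock.Lock() // "About to acquire host-wide IPAM lock."
	defer hostLock.Unlock()

	err := releaseByHandle(handleID)
	if errors.Is(err, errNotFound) {
		// "Asked to release address but it doesn't exist. Ignoring"
		err = releaseByHandle(workloadID)
		if errors.Is(err, errNotFound) {
			return nil
		}
	}
	return err
}

func main() {
	allocations["k8s-pod-network.bfcf738b"] = "192.168.72.6"
	// The first call frees the address; the second is a clean no-op,
	// matching the WARNING-then-success sequence in the log.
	fmt.Println(ReleasePodIPs("k8s-pod-network.bfcf738b", "coredns-76f75df574-frkvk"))
	fmt.Println(ReleasePodIPs("k8s-pod-network.bfcf738b", "coredns-76f75df574-frkvk"))
}

Returning success on the not-found path is what lets kubelet's retries (StopPodSandbox followed by RemovePodSandbox) complete without spurious errors, exactly as the WARNING records above do.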
Jan 15 12:54:54.205418 sshd[6213]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:54.208458 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:37940.service: Deactivated successfully. Jan 15 12:54:54.208509 systemd-logind[1784]: Session 12 logged out. Waiting for processes to exit. Jan 15 12:54:54.211883 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 12:54:54.214034 systemd-logind[1784]: Removed session 12. Jan 15 12:54:54.284460 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:37944.service - OpenSSH per-connection server daemon (10.200.16.10:37944). Jan 15 12:54:54.717833 sshd[6228]: Accepted publickey for core from 10.200.16.10 port 37944 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:54.719332 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:54.723242 systemd-logind[1784]: New session 13 of user core. Jan 15 12:54:54.726435 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 12:54:55.131166 sshd[6228]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:55.135310 systemd-logind[1784]: Session 13 logged out. Waiting for processes to exit. Jan 15 12:54:55.135840 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:37944.service: Deactivated successfully. Jan 15 12:54:55.138632 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 12:54:55.140408 systemd-logind[1784]: Removed session 13. Jan 15 12:54:55.200460 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:37948.service - OpenSSH per-connection server daemon (10.200.16.10:37948). Jan 15 12:54:55.613883 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 37948 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:55.616256 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:55.621298 systemd-logind[1784]: New session 14 of user core. Jan 15 12:54:55.626453 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 12:54:55.989117 sshd[6240]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:55.992392 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:37948.service: Deactivated successfully. Jan 15 12:54:55.996272 systemd-logind[1784]: Session 14 logged out. Waiting for processes to exit. Jan 15 12:54:55.996550 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 12:54:55.998369 systemd-logind[1784]: Removed session 14. Jan 15 12:55:01.069477 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:49274.service - OpenSSH per-connection server daemon (10.200.16.10:49274). Jan 15 12:55:01.501676 sshd[6259]: Accepted publickey for core from 10.200.16.10 port 49274 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:01.504731 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:01.510091 systemd-logind[1784]: New session 15 of user core. Jan 15 12:55:01.515545 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 12:55:01.887439 sshd[6259]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:01.890570 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:49274.service: Deactivated successfully. Jan 15 12:55:01.895885 systemd-logind[1784]: Session 15 logged out. Waiting for processes to exit. Jan 15 12:55:01.897102 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 12:55:01.899056 systemd-logind[1784]: Removed session 15. 
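Sessions 11 through 15 above all trace the same lifecycle: sshd accepts the publickey, pam_unix opens the session, systemd-logind registers "New session N of user core" and a session-N.scope unit starts; at logout the mirror-image records appear. When auditing a stretch of log like this, pairing those records yields per-session lifetimes. A small sketch under the assumption of one journalctl record per line; the regexes are inferred from the record formats shown here and the program itself is a hypothetical helper.

// sessiondur.go — pairs "New session N" / "Removed session N" journald
// records from stdin and prints each session's lifetime. The timestamp
// layout matches the year-less "Jan 15 12:54:47.969866" prefix above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

const tsLayout = "Jan _2 15:04:05.000000"

var (
	opened = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user`)
	closed = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func main() {
	starts := map[string]time.Time{} // session ID -> start time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(tsLayout, m[1]); err == nil {
				starts[m[2]] = t
			}
		} else if m := closed.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(tsLayout, m[1]); err == nil {
				if s, ok := starts[m[2]]; ok {
					fmt.Printf("session %s lived %v\n", m[2], t.Sub(s))
					delete(starts, m[2])
				}
			}
		}
	}
}

Run against this log it would report, for example, that session 11 lived roughly 380 ms (12:54:47.969866 to 12:54:48.355985).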
Jan 15 12:55:06.957467 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:45260.service - OpenSSH per-connection server daemon (10.200.16.10:45260). Jan 15 12:55:07.372974 sshd[6298]: Accepted publickey for core from 10.200.16.10 port 45260 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:07.374222 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:07.378717 systemd-logind[1784]: New session 16 of user core. Jan 15 12:55:07.388522 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 12:55:07.753281 sshd[6298]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:07.756258 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:45260.service: Deactivated successfully. Jan 15 12:55:07.760923 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 12:55:07.761455 systemd-logind[1784]: Session 16 logged out. Waiting for processes to exit. Jan 15 12:55:07.762768 systemd-logind[1784]: Removed session 16. Jan 15 12:55:12.833427 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:45270.service - OpenSSH per-connection server daemon (10.200.16.10:45270). Jan 15 12:55:13.271675 sshd[6314]: Accepted publickey for core from 10.200.16.10 port 45270 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:13.273228 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:13.279623 systemd-logind[1784]: New session 17 of user core. Jan 15 12:55:13.285607 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 15 12:55:13.668415 sshd[6314]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:13.671556 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:45270.service: Deactivated successfully. Jan 15 12:55:13.674381 systemd-logind[1784]: Session 17 logged out. Waiting for processes to exit. Jan 15 12:55:13.674619 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 12:55:13.676636 systemd-logind[1784]: Removed session 17. Jan 15 12:55:18.746551 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:49538.service - OpenSSH per-connection server daemon (10.200.16.10:49538). Jan 15 12:55:19.193029 sshd[6345]: Accepted publickey for core from 10.200.16.10 port 49538 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:19.194514 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:19.199280 systemd-logind[1784]: New session 18 of user core. Jan 15 12:55:19.205475 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 12:55:19.578073 sshd[6345]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:19.583731 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:49538.service: Deactivated successfully. Jan 15 12:55:19.587437 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 12:55:19.588388 systemd-logind[1784]: Session 18 logged out. Waiting for processes to exit. Jan 15 12:55:19.589781 systemd-logind[1784]: Removed session 18. Jan 15 12:55:19.651512 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:49548.service - OpenSSH per-connection server daemon (10.200.16.10:49548). 
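Every "Accepted publickey" record in this log carries the same fingerprint, RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM, so all of these sessions authenticate with a single authorized key. That SHA256 form is the unpadded base64 of a SHA-256 digest over the wire-format public key, and golang.org/x/crypto/ssh reproduces it directly; a sketch follows (the authorized_keys path is a placeholder).

// fingerprint.go — prints the SHA256 fingerprint of each key in an
// authorized_keys file, in the same "SHA256:..." form sshd logs above.
// Requires golang.org/x/crypto/ssh; the file path is a placeholder.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	for len(data) > 0 {
		pub, comment, _, rest, err := ssh.ParseAuthorizedKey(data)
		if err != nil {
			break // no further parseable keys
		}
		// FingerprintSHA256 yields "SHA256:<unpadded base64>", matching the
		// "RSA SHA256:3TKB8H62..." fragment in the Accepted publickey records.
		fmt.Printf("%s %s %s\n", pub.Type(), ssh.FingerprintSHA256(pub), comment)
		data = rest
	}
}

Matching the printed fingerprint against the logged one confirms which authorized_keys entry these sessions used.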
Jan 15 12:55:20.066794 sshd[6359]: Accepted publickey for core from 10.200.16.10 port 49548 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:20.068296 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:20.073310 systemd-logind[1784]: New session 19 of user core. Jan 15 12:55:20.082577 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 12:55:20.550924 sshd[6359]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:20.554224 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:49548.service: Deactivated successfully. Jan 15 12:55:20.557993 systemd-logind[1784]: Session 19 logged out. Waiting for processes to exit. Jan 15 12:55:20.558752 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 12:55:20.559875 systemd-logind[1784]: Removed session 19. Jan 15 12:55:20.638569 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:49556.service - OpenSSH per-connection server daemon (10.200.16.10:49556). Jan 15 12:55:21.087061 sshd[6371]: Accepted publickey for core from 10.200.16.10 port 49556 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:21.088661 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:21.093500 systemd-logind[1784]: New session 20 of user core. Jan 15 12:55:21.098546 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 15 12:55:22.917556 sshd[6371]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:22.921518 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:49556.service: Deactivated successfully. Jan 15 12:55:22.924871 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 12:55:22.925227 systemd-logind[1784]: Session 20 logged out. Waiting for processes to exit. Jan 15 12:55:22.926725 systemd-logind[1784]: Removed session 20. Jan 15 12:55:22.993493 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:49570.service - OpenSSH per-connection server daemon (10.200.16.10:49570). Jan 15 12:55:23.075950 systemd[1]: run-containerd-runc-k8s.io-0bbcc4266aa31c35cdadfb8ac74a1fbb3f19011b16714802a820fd84b9d2aea8-runc.UKaJNk.mount: Deactivated successfully. Jan 15 12:55:23.426477 sshd[6391]: Accepted publickey for core from 10.200.16.10 port 49570 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:23.428513 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:23.433473 systemd-logind[1784]: New session 21 of user core. Jan 15 12:55:23.441565 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 15 12:55:23.934391 sshd[6391]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:23.936993 systemd-logind[1784]: Session 21 logged out. Waiting for processes to exit. Jan 15 12:55:23.937144 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:49570.service: Deactivated successfully. Jan 15 12:55:23.941103 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 12:55:23.943382 systemd-logind[1784]: Removed session 21. Jan 15 12:55:24.009503 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:49576.service - OpenSSH per-connection server daemon (10.200.16.10:49576). 
Jan 15 12:55:24.455837 sshd[6425]: Accepted publickey for core from 10.200.16.10 port 49576 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:24.457256 sshd[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:24.461088 systemd-logind[1784]: New session 22 of user core. Jan 15 12:55:24.465613 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 15 12:55:24.834414 sshd[6425]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:24.837293 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:49576.service: Deactivated successfully. Jan 15 12:55:24.841622 systemd-logind[1784]: Session 22 logged out. Waiting for processes to exit. Jan 15 12:55:24.842126 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 12:55:24.843511 systemd-logind[1784]: Removed session 22. Jan 15 12:55:29.919475 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:58836.service - OpenSSH per-connection server daemon (10.200.16.10:58836). Jan 15 12:55:30.389847 sshd[6442]: Accepted publickey for core from 10.200.16.10 port 58836 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:30.391485 sshd[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:30.395472 systemd-logind[1784]: New session 23 of user core. Jan 15 12:55:30.399518 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 15 12:55:30.839059 sshd[6442]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:30.845971 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:58836.service: Deactivated successfully. Jan 15 12:55:30.852587 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 12:55:30.855380 systemd-logind[1784]: Session 23 logged out. Waiting for processes to exit. Jan 15 12:55:30.857371 systemd-logind[1784]: Removed session 23. Jan 15 12:55:32.715557 systemd[1]: run-containerd-runc-k8s.io-00c4e8ead5f63bee846fdda87ae7840766b62f49e606185a9780dd5fc5ea211f-runc.JNVlYu.mount: Deactivated successfully. Jan 15 12:55:35.914456 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:51094.service - OpenSSH per-connection server daemon (10.200.16.10:51094). Jan 15 12:55:36.348317 sshd[6477]: Accepted publickey for core from 10.200.16.10 port 51094 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:36.349672 sshd[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:36.353986 systemd-logind[1784]: New session 24 of user core. Jan 15 12:55:36.357362 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 15 12:55:36.744048 sshd[6477]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:36.747951 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:51094.service: Deactivated successfully. Jan 15 12:55:36.751303 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 12:55:36.752285 systemd-logind[1784]: Session 24 logged out. Waiting for processes to exit. Jan 15 12:55:36.753428 systemd-logind[1784]: Removed session 24. Jan 15 12:55:41.832475 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:51096.service - OpenSSH per-connection server daemon (10.200.16.10:51096). 
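Each connection above runs in its own per-connection unit whose instance name encodes the endpoints: sshd@22-10.200.20.14:22-10.200.16.10:51096.service reads as connection counter 22, local listener 10.200.20.14:22, remote peer 10.200.16.10:51096. A sketch that decodes that naming pattern, inferred from the unit names in this log (IPv4 endpoints only):

// unitname.go — decodes the per-connection unit names seen above, e.g.
// "sshd@22-10.200.20.14:22-10.200.16.10:51096.service". The pattern is
// inferred from this log and covers IPv4 endpoints only.
package main

import (
	"fmt"
	"log"
	"regexp"
)

// counter, local addr:port, remote addr:port
var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)

func main() {
	unit := "sshd@22-10.200.20.14:22-10.200.16.10:51096.service"
	m := unitRe.FindStringSubmatch(unit)
	if m == nil {
		log.Fatalf("unit name %q did not match", unit)
	}
	fmt.Printf("connection #%s: local %s, remote %s\n", m[1], m[2], m[3])
}

This makes it easy to group all records for one TCP connection, since the same instance name reappears in the service's Started and Deactivated lines.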
Jan 15 12:55:42.296659 sshd[6511]: Accepted publickey for core from 10.200.16.10 port 51096 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:42.298022 sshd[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:42.302309 systemd-logind[1784]: New session 25 of user core. Jan 15 12:55:42.307657 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 15 12:55:42.690186 sshd[6511]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:42.693121 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:51096.service: Deactivated successfully. Jan 15 12:55:42.698357 systemd-logind[1784]: Session 25 logged out. Waiting for processes to exit. Jan 15 12:55:42.699374 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 12:55:42.700691 systemd-logind[1784]: Removed session 25. Jan 15 12:55:47.766481 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:57756.service - OpenSSH per-connection server daemon (10.200.16.10:57756). Jan 15 12:55:48.197986 sshd[6528]: Accepted publickey for core from 10.200.16.10 port 57756 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:48.199543 sshd[6528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:48.205100 systemd-logind[1784]: New session 26 of user core. Jan 15 12:55:48.211788 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 15 12:55:48.589965 sshd[6528]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:48.594799 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:57756.service: Deactivated successfully. Jan 15 12:55:48.598909 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 12:55:48.600297 systemd-logind[1784]: Session 26 logged out. Waiting for processes to exit. Jan 15 12:55:48.601219 systemd-logind[1784]: Removed session 26. Jan 15 12:55:53.664464 systemd[1]: Started sshd@24-10.200.20.14:22-10.200.16.10:57760.service - OpenSSH per-connection server daemon (10.200.16.10:57760). Jan 15 12:55:55.151404 sshd[6567]: Accepted publickey for core from 10.200.16.10 port 57760 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:55:54.096991 sshd[6567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:55:54.101424 systemd-logind[1784]: New session 27 of user core. Jan 15 12:55:54.103462 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 15 12:55:54.480066 sshd[6567]: pam_unix(sshd:session): session closed for user core Jan 15 12:55:54.483026 systemd[1]: sshd@24-10.200.20.14:22-10.200.16.10:57760.service: Deactivated successfully. Jan 15 12:55:54.485965 systemd-logind[1784]: Session 27 logged out. Waiting for processes to exit. Jan 15 12:55:54.486262 systemd[1]: session-27.scope: Deactivated successfully. Jan 15 12:55:54.488888 systemd-logind[1784]: Removed session 27. Jan 15 12:55:59.562425 systemd[1]: Started sshd@25-10.200.20.14:22-10.200.16.10:60986.service - OpenSSH per-connection server daemon (10.200.16.10:60986). Jan 15 12:56:00.027055 sshd[6582]: Accepted publickey for core from 10.200.16.10 port 60986 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:56:00.028451 sshd[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:56:00.032435 systemd-logind[1784]: New session 28 of user core. Jan 15 12:56:00.043552 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 15 12:56:00.428499 sshd[6582]: pam_unix(sshd:session): session closed for user core Jan 15 12:56:00.432289 systemd-logind[1784]: Session 28 logged out. Waiting for processes to exit. Jan 15 12:56:00.433578 systemd[1]: sshd@25-10.200.20.14:22-10.200.16.10:60986.service: Deactivated successfully. Jan 15 12:56:00.438862 systemd[1]: session-28.scope: Deactivated successfully. Jan 15 12:56:00.442401 systemd-logind[1784]: Removed session 28.
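A cheap final sanity check over a stretch of journal like this is timestamp monotonicity: the short-precise prefixes should be non-decreasing, and any record that jumps backwards marks interleaving or transcription damage worth a second look. A sketch, assuming one record per line:

// tscheck.go — flags journal records whose short-precise timestamp jumps
// backwards relative to the previous record; assumes one record per line.
package main

import (
	"bufio"
	"fmt"
	"os"
	"time"
)

const tsLen = len("Jan 15 12:54:47.963828") // fixed-width timestamp prefix

func main() {
	var prev time.Time
	sc := bufio.NewScanner(os.Stdin)
	for n := 1; sc.Scan(); n++ {
		line := sc.Text()
		if len(line) < tsLen {
			continue
		}
		t, err := time.Parse("Jan _2 15:04:05.000000", line[:tsLen])
		if err != nil {
			continue // not a timestamped record
		}
		if t.Before(prev) {
			fmt.Printf("line %d: %s is earlier than the previous record\n", n, line[:tsLen])
		}
		prev = t
	}
}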